Search the Community
Showing results for tags 'redundancy'.
-
I don't know if this belongs here, but I have a question about unRAID and how it handles redundancy. I understand how RAID 5 works: the other disks can combine their data to rebuild a failed disk, but that data is striped across the drives. I also understand that unRAID is fundamentally different and does not stripe data (it fills disks individually), but I want to know how some scenarios would work out. Imagine a setup like this on unRAID: the parity disk can be reconstructed since the data is still on the other disks, and disks A, B and C can easily fit on the parity disk. But what if disk D fails? Where is the parity for that one?
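For what it's worth, single parity covers every byte offset up to the size of the largest data disk, with shorter disks treated as if they were zero-padded, so disk D is protected too. A toy sketch of the idea (illustrative Python, not unRAID's actual implementation):

```python
# Toy model of single XOR parity across independent disks of unequal size.
# The parity "disk" is as large as the largest data disk; shorter disks
# contribute zeros beyond their own length.

def xor_parity(disks, size):
    parity = bytearray(size)
    for disk in disks:
        for i, byte in enumerate(disk):   # offsets past len(disk) count as 0
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving, parity, size):
    """Reconstruct the one missing disk: XOR of parity and all survivors."""
    missing = bytearray(parity)
    for disk in surviving:
        for i, byte in enumerate(disk):
            missing[i] ^= byte
    return bytes(missing)

a, b, c = b"aa", b"bbb", b"c"             # small data disks
d = b"dddddddd"                           # disk D, the largest
size = max(map(len, [a, b, c, d]))        # parity must match the largest disk
p = xor_parity([a, b, c, d], size)

# Disk D fails; its full contents come back from parity + the small disks:
restored = rebuild([a, b, c], p, size)
print(restored)                           # → b'dddddddd'
```

So the parity for D lives in the same place as the parity for A, B and C; it is simply the only disk contributing non-zero bytes at the higher offsets.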
- 6 replies
-
- raid
- redundancy
-
(and 1 more)
Tagged with:
-
Hey everyone, I am a little new to single-board computers and I am trying to do a project where I need to store a series of XML files on a single-board computer. I want to mirror the data onto a second SD card. Does anyone have suggestions on HATs for Pis/Arduinos, or is there already a single-board computer on the market with multiple SD card slots? Thanks, Shock
- 3 replies
-
- redundancy
- raspberry pi
-
(and 2 more)
Tagged with:
-
In traditional RAID setups, data is spread across all drives in the array. If you were running RAID 5 with six drives, for example, you would have five drives' worth of storage space, with the missing space being used for parity data. If you lose one drive, you haven't lost any data: throw in a new drive and rebuild your RAID array (and pray you don't run into an unrecoverable read error or silent corruption, because then you can kiss your data goodbye). This has obvious advantages: it is space-efficient for the amount of redundancy it provides, and it can increase read/write performance with good hardware, since there are multiple drives to run I/O on. However, losing more drives than you have parity will kill all of your data.

What if you wanted to choose how much overhead to spend on parity? Or what if you wanted a drive loss to not completely kill all of your data? Here's an approach. Here, we have a single giant parity RAID setup. Each color represents pieces of data belonging to a single data chunk (e.g. all the red blocks represent one chunk, spread out over all the drives). This is how traditional RAID works.

The proposed "betterRAID" method is to use a fixed parity ratio within the array. If I want N parity drives' worth of space for every M drives' worth of data, my overhead is N/(N+M) (for N = 1 and M = 4, that's the same ratio as a five-drive RAID 5, which is pretty common). However, let me use any number of drives with this setup: write a given chunk of data across 5 of those drives, then the next chunk across the next 5 drives, and so on, like this. Here, the red data is written across five drives (so twice as much of it lands on each individual drive) instead of across all ten. The orange data gets written to the next five drives, then the green, etc.
Notice that if I kill any two drives, I am guaranteed to have 50% of my data survive in the worst case, and 100% in the best case. To gain this advantage over traditional RAID 5, I sacrificed one additional drive's worth of space (one parity drive for every five, meaning two of the ten drives are reserved for parity). Obviously, very large files that span tons of data chunks will become corrupted. Smaller files (which fit inside a single data chunk) survive if their chunk survives, and are therefore recoverable.

Here is a slightly more complicated example. Black lines indicate dead drives. In this case, we write chunks of data across five drives (with 20% of that space used for single parity) and have 18 drives total in the array. Here we can kill two drives and, in the worst case, lose only 25% of our data.

To clarify: a "chunk" is not a complete file. A chunk is just a chunk of data (say, 512KB). If I wrote a 10KB file, it would fit within that chunk, and the next file I wrote might also fit within it. When the chunk is completely full, the next chunk starts filling with new incoming data. Writing a multi-gigabyte file would span thousands of chunks.

There are obvious upsides to this, most notably that losing more drives than there are parity drives will not destroy all data, though much of it would likely be corrupted if it spanned many chunks. This also makes disaster recovery a little better, ensuring that a failure will not necessarily kill absolutely everything. In addition, if we used dual parity we could make it even harder to kill data. The downside is that it is now harder to manage the data for an individual file, since you have to find which drives the data lives on. This doesn't provide the same level of protection that dual- or triple-parity RAID does; it provides a measure of disaster recovery in case a RAID fails completely.
I think it'd be really cool for a software RAID solution like ZFS to implement something like this for RAID Z1, Z2 and Z3.
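The 50%/100% worst/best-case claim above can be checked with a quick simulation (a sketch under my own assumptions, not from the post): ten drives, stripe width five, one parity drive per group, so each five-drive group tolerates exactly one failure.

```python
# Worst/best-case data survival when chunks are striped over fixed groups
# of drives instead of the whole array: 10 drives, two groups of 5,
# single parity per group.
from itertools import combinations

def surviving_fraction(groups, failed):
    # A group's chunks survive if it lost at most one drive (rebuildable).
    ok = [g for g in groups if len(g & failed) <= 1]
    return len(ok) / len(groups)

groups = [set(range(0, 5)), set(range(5, 10))]
fractions = [surviving_fraction(groups, set(f))
             for f in combinations(range(10), 2)]   # every 2-drive failure

print(min(fractions), max(fractions))               # → 0.5 1.0
```

Both failures landing in the same group kill that group's chunks (50% survives); failures split across groups lose nothing (100% survives), matching the post's figures.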
-
Hi guys, I work for a small business, and as of yet we don't keep any backups or have any redundant storage. We recently suffered a drive failure on the PC of our office administrator, which has led to a lot of data being lost, and now we are looking into preventing this in future. However, we are only a small construction business with 3 office employees and mainly store client files, pricing for jobs, invoices etc., and I was hoping for some tips on the best solution: is it worth investing in a NAS, or should we just back up to a couple of external hard drives every couple of weeks? Thanks in advance.
-
Those not interested in the little backstory, please skip to the bold words later in the post. Hello from India. I have been using external hard disks to store my movie and anime collection, and an HP pen drive to store my extremely important documents. Needless to say, in the last 5 years I have had two failed hard disks, one from Seagate and one from WD, as well as my HP pen drive (a heartache, since it had my notes of written stories, over 200 of them; I lost at least a sixth of them). I learned my lesson and moved on to Google Drive, OneDrive and others for my documents. However, I recently lost another hard drive of anime, a collection I took over 3 years to build. Unfortunately a NAS was out of budget, as drives and the NAS boxes from Synology are too overpriced in India, especially for a student like me. Still, after losing 2 of 5 drives, I have decided to build a NAS to prevent further heartache, though budget is a constraint. Things I want from my NAS build: 1. The best possible redundancy (I don't want to lose data ever again if possible). 2. Whichever OS will provide the best option for no. 1. 3. Upgradable hardware (for longevity) and storage that can be scaled later (due to budget constraints). 4. A Plex server at 1080p max (the internet connection is barely 50 to 100 Mbps, with monthly data limits of 100GB most of the time, so a higher resolution doesn't make sense; even this only makes sense since I will mostly be the only user). 5. Budget-friendly - important - still a student, so budget is tight; I will be happy with a base system that I can take my time upgrading as I get more money. 6. Good initial storage size - I currently have 3 working hard disks, a total of 10TB, and the data left with me is around 6TB, so I am hoping for an initial capacity of 16 to 20TB. Limitations on my end: 1. Mainly the internet connection - only that 50 to 100 Mbps connection with a data cap. 2. Budget, as mentioned above.
Thanks in advance, and sorry if I have broken any guidelines. First time here.
- 27 replies
-
- redundancy
- budget
-
(and 2 more)
Tagged with:
-
I was researching ways to back up many large media files. I need to be able to export these files fast, and I also need it to be reliable and redundant in case I have to go back and retrieve original work. These files consist of multiple WAV files, plus converting/exporting video formats through Adobe Premiere & AE. My system is running an Intel 6850K with an MSI X99A Gaming Pro Carbon mobo and a 1TB SSD for the OS and such. After reading up on the pros and cons of each RAID level, I decided on a plan to use 8 HDDs in RAID 50: 2 RAID 5 arrays with 4 disks each. I've seen an LSI card that fits my needs, I think (I honestly don't know the difference between it and one priced $300 more). My question is: is this feasible under $1000 with drives and the storage controller included? Is this even the best way to approach the dilemma? What speeds am I gaining over a software RAID? Here's the link to the card https://www.newegg.com/Product/Product.aspx?item=N82E16816118217
-
Hi All, I have 2 identical 2TB drives. I have some data already on one drive, but the other one is completely empty. I want to set up RAID 1 with my 2nd drive and have searched the net for how to do it. There is Storage Spaces, but it doesn't work, and then I tried adding a mirror in Disk Management, but the option wasn't there. I'm using Windows 10 Home. If more information is needed to help me set up RAID 1 on my system, please let me know. All I want to do is use the 2nd drive for redundancy. If there is another way to do it, please help me out. Thanks
-
In a recent post @SCHISCHKA and I had a disagreement about the importance of redundancy in home NAS applications. His point was mainly that you ought to have a proper backup anyway, so why bother with redundancy. Mine was that drive performance really only matters in niche applications, so RAID 0 is an unnecessary risk, and why not implement a redundancy system: one drive is a small price to avoid the potential issues and work of restoring from backup. So I wanted to hear what you guys think about it. I really like to change my mind due to good arguments, so I am looking forward to hearing some.
-
Afternoon guys, long time lurker, first time poster. I have 4 desktop machines at home and used to use two mirrored 4TB drives on my main PC as data backup. The drives were set up in Storage Spaces on Win 10 and I had no real issues until I ran out of room. I've recently purchased some more 3TB drives (to add to my unused ones), giving me 6 in total, and have put them into a new build which is running Server 2016. Since reading that an SSD cache cannot be added down the line, I've decided to buy two 240GB SSDs for the build. My goal is to have the SSDs mirrored for cache, with the 6 HDDs set up with dual parity so that I have a decent amount of protection. (I've since learned Windows requires 7 drives for dual parity, so I may have to buy another.) I want two-drive recovery because I had one 4TB drive fail in the past, and it was too stressful wondering whether the other disk would then fail too, although I know logically the chance is very small. I've since tried using the Storage Spaces GUI to set up a pool including tiering before setting up the virtual disks, but I keep running into errors and other issues. People I've spoken to have suggested using PowerShell to achieve this, but I'm not too well up on it and wondered if anyone had advice for me. I created the pool without the SSDs and the performance was really poor - I was only getting about 15MB/s of sustained writes. Is there a fix for this? Sorry for all of the noobish questions, but help would be greatly appreciated.
-
- storage spaces
- storage pools
-
(and 2 more)
Tagged with:
-
I am looking at various options for redundancy for a future server build I have in mind, and I am wondering what would be the best option for parity-based redundancy on an array of hard drives. For simplicity and lower cost, I will consider just single parity for now. The options I am looking at are hardware RAID 5, ZFS's RAID-Z1 and a Windows Storage Spaces ReFS single-parity pool. As far as I've managed to understand, RAID-Z1 tends to work better than RAID 5 in terms of speed and solves some of the issues associated with it; besides that, I would not need a RAID card for it. However, I was not able to find much detail on how RAID-Z1 compares with a ReFS single-parity pool.
- 3 replies
-
- server
- redundancy
-
(and 4 more)
Tagged with:
-
I am curious what hard drives and SSDs people are using in their home servers/NAS/video surveillance storage, and generally for large home data storage. I am also wondering what redundancy techniques and software people are using on those servers/NAS/etc. and on the personal devices they back up to their home server. I know this topic is quite wide-ranging, but I felt it would not have made a lot of sense to break it into two or three different posts. Server Storage Devices: Personally, I started with some Seagate IronWolf 6TB hard drives and then jumped to Seagate Exos 10TB hard drives. I also have some SATA SSDs (Samsung 860) in the server, set up in RAID 0 (through software), which I use to store and run my virtual machines. Server Redundancy Technique (RAID, file syncing, backups etc.): Most of the data I have on the server is personal data, backups and media. I do not have surveillance footage on the server at this time, but I might in the future, and I would like to hear how people who have it are dealing with it. For personal data (including projects, work, study etc.), I keep it on my main laptop, on OneDrive (some 1TB available there, and extremely cheap), on the server (synced from OneDrive) and as backups (also on the same server, but on a different drive than the OneDrive folder). For the backups themselves, I don't use any kind of protection (if they're gone, then they're gone; I'll just make new ones), but I do run additional backup plans that store backups locally on my other computers and laptops. So for a system backup, I would have a comprehensive backup plan that backs up the system multiple times per week, keeps several chains of incremental backups going back a couple of months, and stores those backups on the server.
At the same time, I would have a secondary backup plan for the same data that runs only once per week, keeps one chain of incremental backups going back 4 weeks, and stores those backups on the individual device (on a drive other than the system drive being backed up, naturally). For my media library, I initially wanted to go with RAID 1 or some other hardware or software real-time disk mirroring, but I ended up using a file synchronisation/folder mirroring program called Bvckup 2. I use it to sync my media library and its backup (located on separate hard drives) every 24 hours (though the software can be used for real-time synchronisation as well). What I love about this software, besides the fact that it does delta copying and is very user-friendly, is that I was able to set it up so that each time a file gets deleted from the main library, rather than simply being deleted from the backup library as well, the file gets marked for deletion and moved into an archive where it automatically gets deleted 2 days later (it can be set to any period) unless restored. So new files get backed up no later than 24 hours after creation (or immediately, if I were to use real-time sync), and deleted files remain accessible for up to 3 days (2 days minimum plus the period between deleting the file and the daily sync task). Software for Backup of Server and Personal Devices: In regards to backup software, I am using Acronis True Image for my computers, Acronis Backup for my server and virtual machines (I know it's a bad choice; I wanted to use Veeam for my VMs, but it does not support VMware Workstation, only ESXi) and, for my Android phone, a combination of OneDrive for my photos, OneSync for other small files (e.g. configuration files) I want on OneDrive (the OneDrive app does not allow automatic upload of files from my phone except for photos and videos, which is why I need OneSync), and Resilio Sync (basically Bvckup 2, but peer-to-peer based rather than local/LAN limited) for other backups (though mostly I use Resilio Sync for syncing movies between my server and my phone, because I don't want to pay for a Plex subscription for the "privilege" of downloading files from my own server). What advice would others offer on how to improve this set-up, and what storage hardware, redundancy techniques and software are you using for your servers/NAS?
- 26 replies
-
- server
- media server
-
(and 4 more)
Tagged with:
-
Hi all! I just wanted to run my video editing workflow by you to see if it makes sense, or if I need to modify my plan for optimal speed/redundancy/backup/archiving. I will be receiving the raw footage on SD cards, which I will then transfer to my high-speed editing RAID (4 SSDs in RAID 0). From there, I will set up hourly backups to an external RAID consisting of high-capacity HDDs (likely in RAID 1, or something else that offers decent redundancy). The footage on this RAID will be uploaded to the cloud. When I'm finished with the project, I will remove the footage from my high-speed RAID, and the backup of the footage and the project will remain on the HDD RAID until it can be archived (e.g. Project 1 is cloned onto a portable 1TB external drive) and kept in the cloud permanently. Does this make sense? Do you have any further recommendations? Thank you in advance, Erin Joan
-
Hi. I'm working with graphics and my project files are valuable to me; I don't want to lose them. I have a few HDDs and I would like to implement some sort of system to keep my files safe in case of an HDD failure. Where do I start? Do I need a RAID? Can I have just one folder backed up to another HDD? Thanks for any kind of info.
-
Hi all and merry Christmas. I have got a 100Ah telecom backup battery that I acquired from someone at work, and I was wondering if I could get a module to put on it so I can use it as a UPS for my PC. I hope you can help, Vincent
- 1 reply
-
- ups
- power supply
-
(and 4 more)
Tagged with:
-
Hey all, I've never actually handled a server before (at least, not an important one), and between my various options I'm getting a little overwhelmed and could do with some advice. I plan on building a NAS for a small business. It will primarily store video; our current storage needs aren't huge, but footage has to be accessible from the network and data integrity is a big concern. I plan on buying 3x 2TB drives: 2 to put into a RAID 1 array, and 1 to fit to another PC in a different building to back up data nightly. Ideally I'm doing all this from within FreeNAS, either on an older prebuilt machine or a small new build. The main problems I can't quite wrap my head around right now are: 1) The business is located in a premises which experiences short-lived blackouts anywhere from once a week to three times a day, always between 9 and 5. I'm considering a hardware RAID card with a battery, or perhaps a full battery backup for the server, as a remedy, but I really don't understand how hardware RAID cards operate compared to, say, a software RAID with ZFS, and definitely wouldn't be sure what I'd be looking for if I were to purchase one. Consequently I'm not sure how to protect against RAID write holes when the power outages inevitably come. What would you recommend? 2) If I choose to go with a software RAID, especially with ZFS, there's a lot of conflicting information over the use of ECC or non-ECC RAM. I understand the concept behind ECC RAM, but I've never used it and am not entirely convinced it's going to be worthwhile for this workload. At the moment no more than 3 people would be working on the server at once, so it's not an incredibly strenuous job, even if the RAID array adds to the load. We also will be upgrading the server later on down the line and could switch to ECC RAM when cashflow is a little less tight. Is ECC RAM that important to start with?
3) The boss really likes shiny new things, and although I'm convinced that a pre-built machine off of eBay with some new hard drives fitted would do a fine job (maybe even with a battery backup or a battery-fitted hardware RAID card to keep things peachy), I've been planning on building a little box in a BitFenix Nova with an FX-4300, a little power supply, 8GB of RAM and a mobo with enough SATA ports and RAM slots to allow easy upgrades in the future (presuming we're not using ECC RAM here). Am I missing anything? While the budget is kinda set at "beneath 400 euro please", there is flexibility, and if there's anything I should buy now to save trouble down the line I'd happily invest. Thanks y'all!! Edit: Sorry y'all, I wasn't very clear: the blackouts are going to go away in the short/medium term, and convincing the boss to fix the problem/buy a UPS unfortunately isn't feasible right now. Protection against write holes would still be very useful in case the problem comes back further down the line, or for the days where it may continue to be an issue while the server is operational.
- 10 replies
-
I don't know if I came to the wrong place, but every other forum out there just kinda sucks. Right now, I'm running a WISP network with multiple P2P bridges feeding towers for redundancy. I am using STP on ToughSwitches (going to be switching them to Netonix) for the redundancy portion of this. To reduce overhead, I'm thinking about putting my bridged links in router mode and routing the traffic through, forcing packets to use a link based on the route in the routers, but then that would require me to reroute data if a link went down. Could I use BGP to handle the weight of an IP and where it goes, then if it fails have it pick the other link? Or should I just stick with STP and bridge mode and call it a day? Side note: I would just use VLANs on the ToughSwitches along with cost and priority to achieve less overhead, but ToughSwitches kind of suck when it comes to CPU and memory. When I tried that, my throughput was cut by about three quarters... like 30 Mbps on a 100 Mbps link.
-
I've had my WD 500GB external drive for a number of years now (since around 2013) and it's almost full. Over the years the drive has accumulated many memories, personal files, movie backups, installers, basically a bunch of things that it would be a real pain to lose. A few years back a Windows 8 preview build corrupted the drive, which I was able to partially repair on my ancient XP machine, and using Linux to finish recovering everything to my laptop's 750GB drive. I managed to get the recovered files back onto the hard drive and it's been working flawlessly since. Lately I haven't been using the drive much, as the thought of a head crash causing data loss sounds like a risk I'd like to very much avoid. So now I'm looking for an external RAID 1 enclosure to allow me to have the data stored on 2 drives at a time, with the added convenience of portability. The enclosure needs to somehow let me know when a drive fails so I have enough time to get a new one. I found some below but could do with some advice from y'all. Thanks in advance
- 3 replies
-
- portable storage
- hard disk
-
(and 3 more)
Tagged with:
-
So I'm very new to RAID, as you can probably guess from the title, and thought here would be the best place to ask. My current situation is that I have a 1TB boot drive and another 2TB drive I bought later to go with it (both are 7200rpm). My question is: if I bought this RAID card, http://www.scan.co.uk/products/highpoint-rocket-640l-lite-4-internal-port-hdd-controller, and a 3TB drive, would it be possible to do a RAID 1 with the 1TB and 2TB drives mirrored onto the 3TB? If so, can this be done without data loss on the 1TB and 2TB drives? And if this is not possible, what would be the best way for me to get data redundancy without losing my current data? Thanks in advance
-
Hey, I'm looking at creating a RAID array however I've never had any experience with this before. I'm looking at creating a RAID 10 as I will primarily be video editing -- don't worry, I have an SSD for a scratch disk, I would just like a RAID array so if I ever have to pull footage in the future and work with it, I'm not copying it back to my SSD then deleting it when I'm done. What's your advice? What's the point of a RAID controller - is it worth getting a RAID controller even if my mobo supports the RAID type I want to build? How do you do it, and what are some things to look out for? Thanks for everyone's help, I'd rather not lose any data haha. Chris
- 10 replies
-
If so please let me know. I really just want to have a better back up solution than the one that comes with windows. Linus should really do a vid about this unless there is one already.
-
- techquickie
- backup
-
(and 8 more)
Tagged with:
-
Let's say I have 2 drives in a RAID 1 array, and one of the drives dies along with the RAID controller. Would I be able to recover all of my data from the array by taking the drive that still works, connecting it to a different computer, and accessing it from an operating system environment? The reason I'm unsure about this is that I know with more complex levels of RAID, the controller dying means the data won't be accessible by just plugging the drives into a different PC; however, due to the nature of RAID 1 (it being just a mirror), I don't see how this would be an issue. Thanks
- 6 replies
-
- raid
- reliability
- (and 7 more)
-
How would I go about cloning a drive from one to another?
Vitalius posted a topic in Storage Devices
Hey guys, I was wondering what the options in Linux are for cloning drives? Basically, I need to move everything from a failing drive to a new one (my company's server stuff), and I need it to clone the drive exactly, as I need all the permissions and metadata to stay the same. cp/copy wouldn't do this, as it wouldn't retain the permissions/metadata. mv/move wouldn't work, as it deletes the data from the old drive (we need the equivalent of copy, but keeping permissions/metadata). Also, is there a way to do this outside of Linux? For example, is there software for Windows that can clone a drive no matter what file system it uses, by simply reading the drive as 1's and 0's and copying verbatim? Thanks, Vitalius -
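For what it's worth, plain `cp` drops metadata, but its archive mode does not, and `dd` does the "1's and 0's verbatim" copy the poster describes. A sketch (not from the thread; assumes GNU coreutils on Linux, and the device/mount paths are placeholders):

```shell
# File-level copy that keeps permissions, ownership and timestamps:
#   cp -a /mnt/old/. /mnt/new/        # -a = archive mode (recursive, preserves metadata)
# (rsync -aHAX additionally preserves hard links, ACLs and extended attributes.)
#
# Raw block-level clone, filesystem-agnostic:
#   dd if=/dev/sdX of=/dev/sdY bs=64K conv=noerror,sync status=progress
#
# Quick demo that cp -a preserves the mode bits:
tmp=$(mktemp -d)
echo data > "$tmp/src"
chmod 640 "$tmp/src"
cp -a "$tmp/src" "$tmp/dst"
stat -c %a "$tmp/dst"        # prints 640
```

The block-level route copies free space and bad sectors too, so for a failing drive the file-level route is usually gentler unless the filesystem itself must be preserved bit-for-bit.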
Hi guys, Long story short, we had a RAID 5 array. If you went into Disk Management on Windows 2k8 server, two of the drives had the (!) Yellow exclamation point on them. One said Online. The other said Errors. The RAID drives all had "Failed Redundancy" on them. So, when we tried to replace the drive that had "Errors" on it, the thing didn't rebuild. When we tried to stick the drive with errors back in to try and get the system up and running temporarily, it didn't like that. Not one bit. So now we are in emergency recovery mode. Can someone shed some light on what those exclamation points really mean, and why the RAID failed right after we took a single drive out that had Errors on it? Whatever that means. FYI, this is a picture I got from the Disk Management page last Thursday: So yeah. Any help in understanding why it suddenly failed would be great. If it failed because we pulled the wrong drive, it would have fixed itself when we put it back. It didn't. If it failed because the other drive failed (the one with the (!) but still online), then I could understand it. But ... that timing. It just seems too convenient. Any help is appreciated. Thanks, Vitalius.
-
Edit: There's enough info here for this post to start being useful, but it's still nowhere close to done. Any chance this can get stickied? @wpirobotbuilder @looney @alpenwasser here it is! Thought you guys would be interested in this. If you notice any errors in my information, find important information that is missing, or have any questions, please ask! I will be maintaining this thread for as long as I am a forum member. 1 What is the purpose of this tutorial? 3 Why should I use ZFS? ZFS provides redundancy mainly through software RAID, with similar RAID levels to other solutions, but what sets ZFS apart is its block-level checksumming and online data integrity checking. Normal redundant hardware and software RAID levels allow for disk failures and replacements, but they fail to check for silent data corruption. Silent data corruption is caused by occasional read and write errors in hard drives, and usually does not get detected until a degraded RAID array fails to rebuild. At that point it is too late to do anything, and your supposedly redundant storage solution becomes useless. (To clarify: when this happens, all data is lost.) ZFS accounts for silent data corruption by storing a checksum of all used data blocks, which can be checked for integrity at any time through a process known as scrubbing. (ZFS refers to groups of storage devices as pools; they might otherwise be known as arrays, but they're not quite arrays. I'll get into that later.) Scrubbing a ZFS pool re-computes the checksum of every used block and compares it to the stored checksum. If they do not match, the corrupted block can be reconstructed using the stripes and/or parity blocks from the other disks. It is pertinent to note that scrubbing can be done during file system use; the FS (file system) does not need to be unmounted or otherwise taken offline.
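The detect-and-heal cycle described above can be modelled in a few lines (a toy sketch of mine, nothing like ZFS internals): two mirrored copies of each block plus a stored checksum, and a "scrub" that re-hashes every block and repairs a bad copy from the good one.

```python
# Toy model of block checksumming + scrubbing on a two-way mirror.
import hashlib

def checksum(block):
    return hashlib.sha256(block).digest()

blocks = [b"alpha", b"bravo", b"charlie"]
mirror_a = list(blocks)
mirror_b = list(blocks)
sums = [checksum(b) for b in blocks]     # stored at write time

mirror_a[1] = b"brav0"                   # silent corruption on one copy

def scrub(a, b, sums):
    repaired = 0
    for i in range(len(sums)):
        if checksum(a[i]) != sums[i]:    # bad copy detected...
            a[i] = b[i]; repaired += 1   # ...healed from the mirror
        if checksum(b[i]) != sums[i]:
            b[i] = a[i]; repaired += 1
    return repaired

print(scrub(mirror_a, mirror_b, sums))   # → 1
print(mirror_a == blocks)                # → True
```

A plain mirror without checksums could not tell which of the two differing copies was the corrupt one; the stored checksum is what breaks the tie.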
Another issue with many RAID solutions is the RAID 5/6 write hole problem, wherein if power is lost during a write operation, some blocks can be left unwritten or half-written without the FS knowing, leaving it with corrupted files. Hardware RAID accounts for this with battery backup, but battery backup is a hacked solution at best, as it is useless if the battery is missing, dead, or old. ZFS accounts for the write hole problem by using copy-on-write, meaning that when data is written, it does not overwrite the modified file, but instead writes it to a new block and then adjusts pointers accordingly. The checksums discussed previously also ensure that a block of data matches what was supposed to be written. 3.2 Cost "So if ZFS is redundant AND cheap, it can't be fast, can it? There's no way you can have all three," you must be thinking. Well, I'm here to tell you that you thought wrong! Implemented properly, ZFS can be faster than just about all forms of hardware and software RAID. ZFS achieves its speed through caching, logging, deduplication, and compression. 3.3.1 Caching & Logging By default, ZFS will use up to 7/8 of your system memory as a kind of level 1 cache, and it can be configured to use a fast disk (read: an SSD) as a kind of level 2 cache. In ZFS terms, the memory-based cache is known as the ARC, or adaptive replacement cache, and the fast-disk-based cache is known as the L2ARC, or level 2 ARC. ZFS also has an optional ZIL, or ZFS Intent Log: a dedicated partition of a fast disk that ZFS uses to store transactions temporarily before they are written to the main disks in bursts. 3.3.2 Deduplication & Compression ZFS natively supports both deduplication and compression. Deduplication, for those who don't know, allows copies or near-copies of data to be stored as a reference to the original data instead of as a straight copy, which saves space at the cost of a bit of CPU time and a lot of memory.
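The "stored as a reference" idea can be illustrated with a toy content-addressed store (my sketch, not ZFS's on-disk format): identical blocks are written once and everything else just points at them by hash.

```python
# Toy content-addressed block store illustrating deduplication.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}    # hash -> block data (each unique block stored once)
        self.refs = []      # logical layout: ordered list of block hashes

    def write(self, block):
        h = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(h, block)   # no-op if this content was seen before
        self.refs.append(h)

s = DedupStore()
for chunk in [b"AAAA", b"BBBB", b"AAAA", b"AAAA"]:
    s.write(chunk)

print(len(s.refs), len(s.blocks))   # → 4 2  (4 references, only 2 unique blocks)
```

The hash table of seen blocks is also why real deduplication costs "a lot of memory": the dedup table has to stay resident to be consulted on every write.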
Deduplication and compression can be enabled for an entire ZFS pool, or for just a few datasets. I'll get into datasets later, but for now just think of them as folders. 4 Why shouldn't I use ZFS? 4.2 ZOL is Incomplete 4.3 No Defragmentation 4.4 Hardware 5 What do I need to use ZFS? 5.1.2 FreeBSD 5.1.3 Linux 5.1.4 OS X 5.2 Drives 5.2.2 L2ARC/ZIL Drives 5.3 Memory 5.4 Other Considerations For a low-end ZFS server, any cheap consumer motherboard will do; however, for mid-range or high-end builds, a server motherboard should be used for its ECC memory and buffered DIMM support. See section 10 for more info. 5.4.2 CPU 6.1 Installation 6.1.1 Debian 7 6.1.2 Ubuntu 12.04 6.1.3 CentOS 6.x 6.1.4 Solaris 1. do nothing 2. ZFS is native 3. ????? 4. profit 6.2 Basic Setup 6.2.2 Adding a ZIL and L2ARC 6.2.3 Mounting and Unmounting the Pool 7 The Zpool 7.1.2 Compression 7.1.3 Deduplication 7.2 Snapshots 7.3 Scrubbing 7.4 Resilvering 7.5 Pool Monitoring 7.6 Expanding a Zpool 7.7 ZFS RAID Levels 8 Performance Analysis 9.1 ZFS Administration Best Practices 9.2 ZFS Primer 9.3 ZOL Website 9.4 Anandtech SSD Reviews 9.5 ZFSonLinux Setup Guide 10 Sample Setups
CPU: Pentium G3220 - $60
Mobo: ASUS H81M-K mATX board - $60
Memory: 4GB of cheap DDR3 - $40
Storage: 2x 3TB WD Reds - $260
Boot drive: 60GB Kingston SSDNow V300 SSD - $70
Case: NZXT Source 210 - $40
PSU: Corsair CX 430 - $45
Throw the 2x 3TB drives in a mirror and you've got yourself a nice, low-power box with 3TB of available storage. 10.2 Lower-Mid range: $1200, 9TB 10.5 Upper-Mid range: $2800, 24TB
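For the two-disk mirror in the low-end sample build, pool creation would look something like this (a sketch only; the device names are placeholders, and in practice the stable `/dev/disk/by-id/` names should be used instead):

```shell
# Create a mirrored pool named "tank" from the two 3TB Reds:
zpool create tank mirror /dev/sda /dev/sdb

# Verify both mirror members are ONLINE:
zpool status tank

# Periodic integrity check, as described in the scrubbing section:
zpool scrub tank
```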
- 20 replies
-
- filesystem
- storage
-
(and 5 more)
Tagged with:
-
So a drive in my drive pool became marked as disconnected (pictured below as the Hitachi 120GB), and my storage space became locked down earlier today. The drive did not appear in Device Manager on this machine, nor on another machine with Windows 7, but it was visible in the BIOS of both motherboards. Since then, I've thrown a 2TB Barracuda (pictured below as "Something Borrowed") into the same port as the Hitachi, with the intention of letting Storage Spaces rebuild itself. When trying to remove the Hitachi from the pool, Windows says it cannot; after the first attempt at doing this, Windows marked the drive as retired instead of disconnected. This is odd, because there is more than enough capacity, especially with the additional 2TB drive that I thought would replace the 120GB drive. The motherboard only has 8 SATA ports, including the one being used for the OS disk, and I don't really intend on buying a controller card for a while. Any ideas on how I can get the storage space back up?
- 5 replies
-
- windows 8
- storage spaces
-
(and 6 more)
Tagged with: