Search the Community
Showing results for tags 'filesystem'.
-
I have an NVMe drive installed that I was planning on using to install all of my VMs and containers. Which filesystem type should I use to minimize wear and maximize life? What about filesystems for HDDs that will be general storage (e.g., media, backups, playing around)?
- 43 replies
-
- storage
- filesystem
-
(and 1 more)
Tagged with:
-
Good evening, I recently upgraded the SSD in my laptop and installed Linux (NixOS) on it, rather than reinstalling Windows. I've kept my old SSD (1TB Intel Optane H10) in an external enclosure, with my old Windows partition shrunk down. The remaining space (~650GB) I'd like to allocate to a Steam library; however, I'm unsure what filesystem to use for it. The partition will primarily be used from Linux, so I'd prefer a Linux-native filesystem over NTFS. I've tried exFAT, but I'm having problems installing games to it; I saw online that Steam applies FAT32 limits to exFAT, so I have to use a different filesystem. My next idea was to format the partition as ext4, then mount it in WSL2 and assign a drive letter to the mountpoint in File Explorer. However, as the partition is on the same physical drive as my Windows partition, I'm unable to mount it in WSL due to Hyper-V limitations. What other options do I have? Thanks.
-
So I've got three new drives on the way to upgrade my server. The old ones are going to get repurposed or sold. My home server runs headless Debian that I've set up and configured the way I like, and what I've been doing is: mdadm (RAID 5) -> LUKS encrypted container -> ext4 filesystem.

This has worked great. I even converted it from RAID 1 to RAID 5 a few years back while the filesystem was still live and in use, forgot and rebooted it during that operation, and it just picked back up where it left off. When I had a drive die, the process of degrading the array and replacing the dead drive was simple and went off without a hitch.

The server has two primary jobs, Plex and Nextcloud. The Nextcloud data directory and my Plex media folder both live on the array. It's only ever accessed by a handful of people at once and my home network is just gigabit, so performance isn't the be-all and end-all, but I would like to retain the ability to saturate gigabit networking when transferring large files.

However, I'm considering using ZFS when the new drives arrive, for the following reasons:
- A lot of the features I'm getting through multiple layered solutions are available directly through ZFS itself. Instead of using mdadm for RAID, LUKS for encryption, and then ext4 for the filesystem, ZFS would tick all those boxes by itself.
- The one time I did have a drive die while using mdadm, the array was unresponsive until I physically removed the drive. I don't know if this was because of the nature of the failure, or because mdadm wasn't willing to automatically mark the drive as bad and keep going. The failure was of the arm that carries the read/write head, where you could hear it knocking and almost bouncing inside the drive. Once I removed the drive and marked the array as degraded, it worked fine on two drives until the replacement arrived in the mail, but I'm wondering if ZFS would have handled this more gracefully.
I do have some concerns with ZFS, though:
- I know the "1GB of RAM per TB of data" figure is not a hard and fast rule, but rather a rule of thumb for people who enable deduplication. Still, I've got 24TB of data right now and will have 36TB of available space, but the system only has 16GB of RAM and can't be upgraded, as that's all the motherboard supports. It's an old AM3 socket motherboard from Alvorix that's about 10 years old. Would this be a problem for a system that will be managing the storage AND hosting Plex and Nextcloud at the same time? It's working fine now, but I'm not sure if ZFS would cause issues.
- How much of a performance hit is the compression? Can it be turned off when creating the zpool? The CPU is an old 6-core Phenom II and it works fine now with mdadm and LUKS, but I worry that adding compression on top of the RAID striping calculations and the encryption might incur a noticeable performance hit.

I'm just totally new to ZFS. I've known about it for a while but have never implemented it myself, so I'm trying to decide whether to pull the trigger. Since I'll be creating an entirely new array and migrating the data, if I'm going to make the switch, now is the time.

Also, what about Btrfs? Would it be a better solution? I know it supports snapshots, checksums and such, but it doesn't support encryption (yet), which I want, so if I went with it I'd have to layer it on top of LUKS like I'm doing now with ext4. Would that have any effect on its ability to do checksums or snapshots? I'm basically just looking for some knowledge and advice. I'd appreciate anything y'all can educate me on.
-
Hello! I’ve got a Raspberry Pi 4 configured as an SMB file server. It hosts my Plex media library and personal documents I don’t want in the cloud. It's used by Windows, macOS, and iOS devices. It’s been working well so far, but today I tried to copy a 22GB file over and ran into what seems to be a file size limit (16GB) on the exFAT drive. With that in mind, I know there are multiple other Linux filesystems. Is there a filesystem that will give me read/write compatibility with macOS, iOS, and Windows 10 with a larger file size limit?
- 10 replies
-
- raspberrypi
- smb
-
(and 4 more)
Tagged with:
-
I accidentally formatted my disk, then used TestDisk to try to fix it. TestDisk asked me to run fsck.ext4 and fix the superblocks, which worked, and everything came back. But after I reboot, things go back to the beginning and the files are gone again. How can I save the result / write the files back to the disk, or repair the filesystem completely? Thank you!
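For context, here's how I've been experimenting with fsck.ext4 safely: it can be pointed at a throwaway image file instead of the real disk, and any repairs it makes get written back into that image. A minimal sketch (scratch paths, made-up names):

```shell
cd "$(mktemp -d)"

# Make a throwaway 8 MB ext4 image -- no real disk at risk
dd if=/dev/zero of=disk.img bs=1M count=8 status=none
mkfs.ext4 -q -F disk.img

# -f forces a full check, -y answers yes to every repair prompt;
# fixes are written back into disk.img itself
fsck.ext4 -f -y disk.img
```

The same flags apply to a real partition (e.g. /dev/sdb1), but only run them on an unmounted filesystem.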
-
Whenever I boot up the system, I get a white screen that says the system is unrecoverable and to log off and try again. This started right after I attempted an update. When I turn on the system, things like Plex and my network share work, but I am unable to access the GUI, and all the cleanup commands I have run have just led to errors. Most of the errors are related to the emacsen-common package. I am kind of a Linux newb, so any suggestions are appreciated. Thanks
-
Hello LTT forum! I have a bit of a mountainous task ahead of me, and I'm asking if there's a way to make it more efficient/organized. Because of the pandemic, we've been scrambling to create an online store for the school and office supply store my family owns. I've been tasked with photographing the items we have to put on the website we're making. I've been given an Excel file of our inventory containing around 8,000 items (I'm only photographing around one or two thousand of them). Is there a program that can take an item's name from the Excel sheet, then create a folder/file structure where I could just drop the pictures I've taken? Or any alternative way that a program could pair an item name from the Excel file to its picture(s)? Thanks for any pointers in the right direction.
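To illustrate what I'm after: if the sheet gets exported to CSV (File > Save As > CSV in Excel), even a few lines of shell (run from WSL, Git Bash, or a Linux box) could make one folder per item. Everything below is a made-up example, not our real data:

```shell
cd "$(mktemp -d)"   # scratch directory for this demo

# Stand-in for the exported spreadsheet: item name in the first column
printf 'Stapler,10\nBlue Pen,25\nA4 Notebook,8\n' > inventory.csv

mkdir -p product_photos
while IFS=, read -r item _rest; do
    [ -z "$item" ] && continue                        # skip blank lines
    safe=$(printf '%s' "$item" | tr -d '\\/:*?"<>|')  # drop characters invalid in folder names
    mkdir -p "product_photos/$safe"
done < inventory.csv
```

One caveat: a plain IFS=, split breaks on quoted item names containing commas, so those would need cleaning up first.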
- 4 replies
-
- pictures
- filesystem
-
(and 3 more)
Tagged with:
-
I accidentally set my root folder permissions to 777 when I was creating a directory for a media server:
drwxrwxrwx 24 root root 4096 Oct 17 18:18 .
drwxrwxrwx 24 root root 4096 Oct 17 18:18 ..
What is the default, and how do I get it back?
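From what I can find, / on most distros defaults to 755 (drwxr-xr-x, owned by root:root), so the fix should be `sudo chmod 755 /` (assuming only / itself was changed, not chmod -R). Demonstrated here on a scratch directory rather than the real root:

```shell
d=$(mktemp -d)      # stand-in for /, so nothing real is touched

chmod 777 "$d"      # the accidental state
chmod 755 "$d"      # restore the conventional default

stat -c '%a' "$d"   # prints: 755
```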
-
A while ago I had the bad idea of formatting the RAID 1 in my My Book Duo as exFAT. Since then I’ve filled the hard drives to almost 70%, and exFAT now sometimes gives me headaches, which has me looking to reformat the unit with a journaled filesystem. Unfortunately I don't have a spare hard drive to use as a second leg, so I was looking to exploit the RAID's disk redundancy, but I'm not sure how to handle the process, as I don't want to break the RAID. What's the proper way to do it?
1. Remove one of the two disks.
2. Mount it alone to check that the data is readable.
3. Keep it as the data source drive.
4. Mount the old drive; this will be a degraded RAID drive.
5. Wipe and format disk 2 (the one still inside my external device) to a journaled filesystem.
6. Move the data onto the freshly formatted drive (disk 1, the source --> disk 2).
7. Rebuild the RAID from disk 2 onto the other disk.
Am I missing something? Since the RAID is managed by the box, do I need to take care of RAID metadata? Thanks
-
Hi everyone, here comes a probably odd question. I need to keep a local mirror of the Debian 10 repositories on my hard drive. Due to internet access difficulties and various reasons, using the repos straight off the internet is not an option for me. I managed to mirror the official repos with debmirror easily, but now my question is this: what's the best way to keep all of these files on my HDD? The repository is about 150GB and, as you might already know, it's made up of lots of small files. Is it OK for my HDD to have all of these files on it, or would it be better to keep them somehow in one big file that contains it all, like an .iso or .img? I'm wondering about this because I don't want to make my hard drive more prone to failure (it's already an old drive with more than 5 years of use), and buying a hard drive in my country right now is very expensive, so that's not an option. I would appreciate your help; thanks in advance.
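To illustrate the single-file option I'm considering: the simplest form would be packing the mirror tree into one tar archive, so the drive only holds a single large file instead of thousands of small ones (a loopback filesystem image would be the other common route). Paths below are made-up stand-ins for the real mirror:

```shell
cd "$(mktemp -d)"   # scratch area standing in for the real drive

# Stand-in for the debmirror output tree
mkdir -p debian-mirror/pool
printf 'fake package data\n' > debian-mirror/pool/example.deb

# Pack the whole tree into a single archive file...
tar -cf debian-mirror.tar debian-mirror

# ...which can be listed or extracted later without touching
# thousands of individual inodes on the drive
tar -tf debian-mirror.tar
```

The trade-off is that apt can't serve packages straight out of a tar, so the archive would have to be unpacked (or loop-mounted, in the image case) whenever the mirror is actually used or updated.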
-
Hello LTT Forum! I have never posted here before, but I watch LTT constantly and figured maybe someone here could assist me. I have an Ubuntu 18.04 server that is used as a Ceph OSD server. This is a dedicated Supermicro server not running any VMs. It had been doing fine until I noticed the machine does not come back up after a reboot. The OS is installed on a 16GB SATA DOM, and the biggest thing I see on boot is that the /boot/efi drive fails to mount. By fails, I mean it times out after trying for 90s. Shortly after, it says I have entered emergency mode and I can press Ctrl+D to continue. I was able to do that and move through. From there I noticed that it did properly mount /boot/efi, but always after emergency mode started and then ended.

Today, I could not get it to start for the life of me. I found through fsck that /dev/sdo1 (which is /boot/efi) reports a dirty bit being set. I used fsck to clean it up and rebooted. On the way up I saw the same timeout, and this time I could not get into the OS at all. I restarted many times and tried going through recovery mode multiple times. The server's load is really high, around 12 according to top, so sometimes it just freezes and does nothing. I am back in now and found that the same mount point has a dirty bit again. Since it is just the UEFI filesystem that has the issue, everything functions as normal once I am in. I can't leave a production server like this, as I need to reboot on occasion. Any ideas on what I can try? I have googled a million things, edited my fstab a couple of ways, and tried to clean the filesystem multiple times. I am afraid to reboot now, as it can be hours before I get this server online again. I feel like the issue resides in the UEFI boot, but I am not a filesystem expert.

Here is the fsck on the device. I did not make any changes to it this time yet:

user@server:~$ sudo fsck /dev/sdo1
fsck from util-linux 2.31.1
fsck.fat 4.1 (2017-01-24)
0x41: Dirty bit is set. Fs was not properly unmounted and some data may be corrupt.
1) Remove dirty bit
2) No action
? 2
There are differences between boot sector and its backup.
This is mostly harmless. Differences: (offset:original/backup)
  65:01/00
1) Copy original to backup
2) Copy backup to original
3) No action
? 2
Perform changes ? (y/n) n
/dev/sdo1: 10 files, 1538/130812 clusters

Here is my fstab. This is how it was before the issue, and everything worked just fine:

UUID=1937fe1c-a66f-4cfe-87e8-085da379e16f /         ext4 errors=remount-ro 0 0
UUID=6B8B-0009                            /boot/efi vfat defaults          0 0

Any help would be appreciated. Thanks, -Vetter
-
So, I'm having a thing right now... I've got four SSDs that I want represented as one volume. There are obviously many ways to configure them, but I don't need parity and I wouldn't be bothered by single drive speeds either. That being said, I do want to have the drives worn somewhat evenly. Or at least, I'd prefer a system where all the load doesn't get slapped onto one drive until it's full and floods over into the other drives. Obviously, I could go with a striped configuration, acting as a software RAID0, but of course, faults with any of the drives will kill the entire array, more or less. So, is there a way of configuring the volume in a way that it stores data somewhat evenly between the drives, but without striping? -- Note that the data being placed on these drives isn't inherently critical and losing it really isn't the worst thing in the world. It's only 2TB worth... But less inconvenience is certainly better. Any ideas? BTW, thanks in advance. I'd be curious to see what the options might be, if any.
-
I have a folder called help\\ and I'm trying to change the name of it so that I can take the files out of it. I can use Windows and Linux.
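On the Linux side, I gather the trick is just quoting the name (and using -- so mv doesn't read it as an option). Reproduced here in a scratch directory:

```shell
cd "$(mktemp -d)"   # scratch area so nothing real is touched

# Reproduce the situation: a directory whose name ends in backslashes
mkdir 'help\\'

# Quote the literal name and rename it to something sane
mv -- 'help\\' help

ls -d help   # the files inside are now reachable normally
```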
- 7 replies
-
- filesystem
- directory
-
(and 2 more)
Tagged with:
-
Whenever I try to open an image with Windows Photos I get this error: I tried to repair the disk with "chkdsk /f" in Command Prompt, but it didn't work.
-
After watching Linus's video about blocking adverts, I ordered a Raspberry Pi 3B+ CanaKit. I downloaded Raspbian and logged into the Pi using PuTTY. I updated Raspbian prior to installing Pi-hole, but nothing happened; I was NOT presented with a password and username. I tried to uninstall and install Pi-hole again, again to no use. I thought I'd start from scratch, so I formatted my card and flashed the Raspbian image again using Etcher. Now, after installing Raspbian and making an ssh file, when I put my SD card in the Raspberry Pi, my Angry IP Scanner won't find it.
- 8 replies
-
- raspi
- raspberrypi
-
(and 4 more)
Tagged with:
-
Preface Having heard Linus complain about games automatically updating in Steam, I feel compelled to highlight a feature of Microsoft Windows since XP, and something that's been around since Ext3 was a thing for open-source operating systems. This works for anything which requires updates for continued operation by an end-user. And you already have these tools in your system.

The feature Filesystem links. Many open-source systems have used hard links to create faux replacements of library files when a library upgrades; avid users have been using directory junctions and symbolic ("soft") links with remote storage management software to distribute files without creating hard duplicates on the local filesystem; and yes, links can even make game modding stupid-easy, without destroying or otherwise making useless the original content.

How it works For files on partition formats which support this feature, the idea is pretty simple; they're shortcuts on steroids. Rather than a launcher pointing to a file, they are the actual file itself being referenced from a different location. This can be leveraged in a variety of ways (use your imagination), but there are some limitations:
- The partition must support the creation of filesystem links.
- Between partitions, only soft links can be used.
- Directories (usually) should not be referenced by a hard link.
- Soft links contribute to inodes in use (as they are, in effect, another file).
- Filesystem link support in remote storage applications varies.

Soft links reference the absolute path of a file, so if the path does not exist, the link is broken. Hard links reference the index node of another file; this is why soft links use up inodes while hard links do not: a hard link is the file. Also, when you delete the original file, your hard links still reach the data you deleted, because the index node is not orphaned.
In most instances, you can also make a soft link that points into a partition format which does not support links, but not from one, because the partition holding the link must support link creation.

So, how does this help with games? In a variety of ways: links can be made to preserve the original copies of files while making modified content available by referencing a game from a different path. It might seem like a pain in the ass to do, but with some work in the command line you can create easy-to-use scripts which will do all of the heavy lifting for you. ...No wait, hang on! Yes, I said scripts, but the software used will help create these scripts rather easily with some basic modifications. I promise it's easy. If this idiot can do it... well, yeah. #teamtrees

From hereon, everything referenced is for Microsoft Windows. You open-source nerds should be smart enough to convert these instructions for your variety of Linux, Unix or BSD.

To create a forest, one must first plant its seeds. While cliché, it's also true, and as such this maxim should be exercised before any other action takes place. To get your seeds, open cmd and cd into a directory you want to create a duplicate of. Once there, perform tree /F /A > file.txt (name file.txt anything you wish, at any path you want this text file to be) and open it in your preferred text editor.

Growing the forest The output is... well, a tree of the directory, including files (due to the /F argument) and using readable characters (due to the /A argument). Using find and replace, use context clues to figure out which set of characters to replace with the following commands:
- mkdir
- mklink for soft file links
- mklink /H for hard file links
- mklink /D for soft directory links
- mklink /J for directory junctions ("hard" directory links)

On Windows, a limit on how many files can be made in a given partition isn't much of a concern, so really, you should use soft links wherever you can, but only for directories you will not be touching.
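For the open-source nerds mentioned above, the cmd commands map roughly onto ln; here's a quick scratch-directory sketch of the same ideas (file names are made up):

```shell
cd "$(mktemp -d)"   # a scratch area to play in
printf 'original data\n' > original.txt
mkdir originalDir

ln -s original.txt softlink.txt   # soft (symbolic) link, like mklink
ln original.txt hardlink.txt      # hard link, like mklink /H
ln -s originalDir dirlink         # soft directory link, like mklink /D

# Hard links survive deletion of the original name...
rm original.txt
cat hardlink.txt   # prints: original data

# ...while the soft link is now dangling, since it stored only the path
cat softlink.txt 2>/dev/null || echo 'soft link is dangling'
```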
Having foreknowledge of which directories your modifications will touch is necessary for these file operations to produce a clean work which does not jeopardize the integrity of a game's original files. So in brief, this is how your script should handle the operation (mklink takes the new link first, then the target; use absolute target paths to be safe):

mkdir modGame
mklink /D modGame\originalDir originalGame\originalDir
mklink modGame\originalFile originalGame\originalFile
mklink /D modGame\modDir modSrc\modDir
mklink modGame\modFile modSrc\modFile
mkdir modGame\moreDir
mklink /D modGame\moreDir\originalDir originalGame\moreDir\originalDir
mklink modGame\moreDir\originalFile originalGame\moreDir\originalFile
mklink /D modGame\moreDir\modDir modSrc\moreDir\modDir
mklink modGame\moreDir\modFile modSrc\moreDir\modFile

And so on, with each name representing the following:
- original*: original game, original game directory or original game file.
- mod*: modified game, modified game directory or modified game file.
- modSrc*: modification source.
- *Game: title in question.
- *Dir: a directory / folder.
- *File: a file.

The initial mkdir creates a new directory for the modified game. While some things can be linked directly, such as a game's dependencies or the game itself, anything your modifications will adjust must be put into its own directory, as a mix of the game's original files and the modified files which would otherwise overwrite them. The above example can be modified to suit; if you intend on creating the modified game root directory yourself, just keep the link commands and create new directories further up the tree as you go. Save your work as a batch (.bat) file and execute it. If all goes well, you should get a modified game using only symbolic and/or hard links.

A GUI way If the above is just, like, way too much, or you don't feel like spending a weekend doing any of this stuff, there is a utility for Explorer you can install which permits the creation of filesystem links within the file manager itself.
This utility can be found here: https://schinagl.priv.at/nt/hardlinkshellext/linkshellextension.html#download The main caveat is that if you want to share this with your friends, you have to repeat your operations on their machines, whereas with a script your operations can be distributed with simple instructions for them to apply the modifications themselves, should you want to share an interactive multiplayer session where mods are not usually supported in-game. If you want to get into some more advanced stuff using %variables%, you can further simplify the main script and tell your friends to play with the variables themselves, as well as include simple if/else conditionals to stop your friends from being idiots and somehow accidentally using the wrong stuff, but that is beyond the scope of this article.

Conclusion The use of filesystem links for game modification can be a bit tedious; not only do you have to create a file which performs the operations for you, or use a GUI utility (and replicate your steps on other machines), you also have to manage your links between game updates. This can be as simple as overwriting your mod sources, or as complex as re-writing the entire script from scratch. However, the time spent is worthwhile if you want to modify something guarded by DRM without ruining your original content, keeping it usable in multiplayer or for guests who don't care to use your mods.
- 1 reply
-
- software
- filesystem
-
(and 3 more)
Tagged with:
-
Hi, I really need help. Yesterday my PC was acting strangely, but I restarted and it worked OK for a while; when I was finishing up it started doing it again. I was tired, so I decided to just shut down and leave it for today. The thing is that today, when I tried to turn it on, Windows "automatic repair" came up and my PC didn't boot. I tried to use the chkdsk command, but it says that it doesn't work on RAW volumes. I've tried everything I could think of, and I can't reformat the SSD since it's my work PC and I haven't backed up the data in a while. Searching the internet I found this article: https://www.easeus.com/resource/convert-raw-ntfs.htm but since it talks about hard drives I didn't want to risk my SSD. My PC is a Dell Inspiron 17 7000 2-in-1.

Update: Talking with some people, I noticed that my computer started acting weird after contacting Microsoft Office support; they connected to my PC, and apparently I am not the first one whose hard drive / SSD fails after Microsoft Office support connects to the PC.

Update 2: I booted my PC using Hiren's boot utility and it recognizes all the partitions of my SSD with all names and capacities, but volume C appears as RAW. PLEASE HELP ME!!!!
-
Hi guys, this should be rather simple: I don't understand why there are multiple unallocated spaces. I was previously led to believe they should just merge together; if anyone has an explanation I would really appreciate it. It's an SSD with TRIM, using the GPT disk format. Kind regards, -Kurt101
-
So I have an 8 GB microSD card which used to work fine, but when I plug it into my computer through a card reader it just shows up as a drive letter, and Windows asks me to format it, which I did (as I don't care about the data on it), but Windows was unable to format it. So I checked it with chkdsk from cmd, and it says the type of the file system is RAW. Then I tried to format the card through Disk Management to change it to FAT32, but it shows the card as having only 121 MB of storage space, because of which Disk Management cannot format it to FAT32. If anyone can help me recover the card, that would be appreciated. Again, I don't care about the data on it, and I have attached a few screenshots of the errors in case that helps. [EDIT] After trying all the methods to format the card, I think the card has lost its file system. If anyone can tell me how to put a filesystem on it manually, that might work and fix the card.
-
Hello, I have a bit of a problem with ZFS. I created a zpool on LUKS-encrypted drives and now I can't get the ZFS filesystem mounted after a reboot: upon importing the zpool, explicitly mounting it, or changing certain properties, the command just reports an error. I had created a smaller test zpool, tested it for functionality, and verified that my setup works without problems (storing data, exporting, importing, re-mounting encrypted drives, after reboots). I then copied all of my data from my mass storage to it and sorted most of it. I still have most of the original data on the source drive, but I sorted, deleted and moved data for half a day, and it would be a pain to do all of that again, including creating the zpool, getting UUIDs, writing mount/unmount scripts and re-downloading lost media files.

What I did, in order, after which the problem occurred:
- Creating LUKS-formatted drives
- Creating a mirrored zpool on both drives (automatically mounted to /Data)
- Copying data from various sources to it
- Sorting some data with a file manager
- Unmounting the zpool
- Exporting the zpool
- Removing the LUKS encryption mappings (luksClose)
- Reboot
- Mounting the encryption mappings (luksOpen)
- Importing the zpool
- Noticing the problem with mounting the zpool
- Scrub
- Checking zpool status and properties (e.g. the canmount flag)

Other troubleshooting steps I took:
- Booting a Linux ISO, but failing to load the ZFS modules after installing them
- Reducing fstab to just mounting root and /boot
- Upgrade of the zpool (already up to date)
- Checking dmesg (no relevant info found)
- Stracing the mount command (problem not found)

Current zpool setup/status:

# sudo zpool list -v
NAME      SIZE  ALLOC   FREE  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
Data     3.62T  2.39T  1.24T         -   40%  65%  1.00x  ONLINE  -
 mirror  3.62T  2.39T  1.24T         -   40%  65%
  Data1      -      -      -         -     -    -
  Data2      -      -      -         -     -    -

# zpool status -v
 pool: Data
 state: ONLINE
 scan: scrub repaired 0 in 5h43m with 0 errors on Mon May 23 05:04:47 2016
config:
 NAME       STATE   READ WRITE CKSUM
 Data       ONLINE     0     0     0
  mirror-0  ONLINE     0     0     0
   Data1    ONLINE     0     0     0
   Data2    ONLINE     0     0     0

As you can see, there are no signs of corruption, errors or any other reported problems, just the inability to mount. As "scrub" and "zdb Data" complete successfully (and the latter outputs files), I think there's nothing wrong with the zpool and the data is readable. I'm probably just doing something wrong, I just have no idea what, as I could previously mount zpools without issues with multiple test zpools using the same name and the same mounting/unmounting scripts. Maybe one of you can find the problem; here is some more information:

Setup: 4.5.4-1-ARCH, system specs
Complete log of what I've done and output of various status commands, zpool history, strace output and package versions: Pastebin
Guide I followed (mostly): ZFS on Linux with LUKS encrypted disks | make then make install

Any help is appreciated!

Edit: Got the zpool working and mountable under Arch without errors now, with downgraded ZFS packages and Linux kernel (LTS versions). Going to write a bug report about this situation on the ZFS GitHub page. More info: https://www.reddit.com/r/zfs/comments/4kzksr/zpool_although_healthy_not_mountable/
-
- filesystem
- zfs
-
(and 2 more)
Tagged with:
-
So I got an old PC and decided to make it a file server. Before, I just had a USB hard drive connected to my Asus router. It worked well, but it took a while to show everything on other devices because of the slow hardware. Anyway, I got the PC up and running and put in two hard drives and two USB drives. Currently I am only using one USB drive, and I shared it with the right-click share option (Windows 7 Pro), but now I cannot write to it from anything. I've tried two Windows PCs (7 Pro and 8.1 Pro), an Android phone using ES File Viewer, and a Linux computer. It says "You need permission to perform this action". I've looked through the computer that is sharing it and I haven't found anything to allow me. Help please :c
-
Heya nerds! Found this picture of the Linux filesystem structure, thought it was interesting, then decided to make a wallpaper out of it. Uncompressed (rev_0): https://drive.google.com/file/d/0Bz34OnKxVGu7bklIaUhBakNQOTQ/view?usp=sharing I'm not very good with GIMP, but I like the way it turned out, so...
- 1 reply
-
- linux
- filesystem
-
(and 1 more)
Tagged with:
-
Thinking of building my brother a hackintosh for Christmas, but I have a question. I want to put in an SSD, but only like 64GB, MAYBE 120GB, and he will just fill it up and get confused. I only want the SSD so it boots quickly; I don't want him to be able to store files on it. On Windows I know that you can install the OS on the SSD and then relocate stuff like Documents, Downloads, even the Desktop to a hard drive with lots of storage. Could I do a similar thing in OS X Mavericks or Yosemite? (Also, if anyone could check out this build and let me know if it will even work with Mavericks or Yosemite, that would be great.) http://pcpartpicker.com/p/BRkq7P
-
- hackintosh
- ssd
-
(and 2 more)
Tagged with:
-
Edit: There's enough info here for this post to start being useful, but it's still not anywhere close to done. Any chance this can get stickied? @wpirobotbuilder @looney @alpenwasser here it is! Thought you guys would be interested in this. If you notice any errors in my information, find important information that is missing, or have any questions, please ask! I will be maintaining this thread for as long as I am a forum member.

1 What is the purpose of this tutorial?

3 Why should I use ZFS? ZFS provides redundancy mainly through software RAID with similar RAID levels to other solutions, but what sets ZFS apart is its block-level checksumming and online data integrity checking. Normal redundant hardware and software RAID levels allow for disk failures and replacements, but they fail to check for silent data corruption. Silent data corruption is caused by the occasional read and write error in hard drives, and usually does not get detected until a degraded RAID array fails to rebuild. At that point it is too late to do anything, and your supposedly redundant storage solution becomes useless. (To clarify: when this happens, all data is lost.) ZFS accounts for silent data corruption by storing a checksum of all used data blocks, which can be checked for integrity at any time through a process known as scrubbing. (ZFS refers to groups of storage devices as pools; they might otherwise be known as arrays, but they're not quite arrays. I'll get into that later.) Scrubbing a ZFS pool re-computes the checksum of all used blocks and compares it to the stored checksum. If they do not match, the corrupted block can be re-constructed using the stripes and/or parity blocks from the other disks. It is pertinent to note that scrubbing can be done during filesystem use; the FS (filesystem) does not need to be un-mounted or otherwise taken offline.
Another issue with many RAID solutions is the RAID5/6 write hole problem, wherein if power is lost during a write operation, some blocks can be left unwritten or half-written and the FS won't know, leaving it with corrupted files. Hardware RAID accounts for this with battery backup, but battery backup is a hacked solution at best, as it is useless if the battery is missing, dead, or old. ZFS accounts for the write hole problem by using copy-on-write, meaning when data is written, it does not overwrite the modified file, but instead writes it to a new block and then adjusts pointers accordingly. The checksums discussed previously also ensure that a block of data matches what was supposed to be written.

3.2 Cost "So if ZFS is redundant AND cheap, it can't be fast, can it? There's no way you can have all three," you must be thinking. Well, I'm here to tell you that you thought wrong! Implemented properly, ZFS can be faster than just about all forms of hardware and software RAID. ZFS achieves its speed through caching, logging, deduplication, and compression.

3.3.1 Caching & Logging By default, ZFS will use up to 7/8 of your system memory as a kind of level 1 cache, and it can be configured to use a fast disk (read: an SSD) as a kind of level 2 cache. In ZFS terms, the memory-based cache is known as the ARC, or adaptive replacement cache, and the fast-disk-based cache is known as the L2ARC, or level 2 ARC. ZFS also has an optional ZIL, or ZFS Intent Log: a dedicated partition of a fast disk that ZFS uses to store transactions temporarily before they are read or written to the disk in bursts.

3.3.2 Deduplication & Compression ZFS natively supports both deduplication and compression. Deduplication, for those who don't know, allows copies or near-copies of data to be stored as a reference to the original data instead of as a straight copy, which saves space at the cost of a bit of CPU time and a lot of memory.
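To make the deduplication idea concrete, here's a toy sketch of the principle: identical content hashes identically, so a second copy can be stored as a reference to the first instead of as more blocks. (ZFS does this per block with its existing checksums; whole files and sha256sum stand in here, and all file names are made up.)

```shell
cd "$(mktemp -d)"   # scratch area for the demo
printf 'same payload\n' > a.dat
printf 'same payload\n' > b.dat   # byte-identical copy
printf 'different\n'   > c.dat

# Identical content produces an identical hash, so b.dat could be
# stored as a reference to a.dat's blocks instead of a second copy;
# c.dat hashes differently and needs its own blocks
sha256sum a.dat b.dat c.dat | sort | uniq -c -w64
```

The memory cost mentioned above comes from ZFS having to keep a table of these per-block hashes resident to look up every incoming write.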
Deduplication and compression can be enabled for an entire ZFS pool, or just a few datasets. I'll get into datasets later, but for now just think of them as folders.

4 Why shouldn't I use ZFS?
4.2 ZOL is Incomplete
4.3 No Defragmentation
4.4 Hardware

5 What do I need to use ZFS?
5.1.2 FreeBSD
5.1.3 Linux
5.1.4 OS X
5.2 Drives
5.2.2 L2ARC/ZIL Drives
5.3 Memory
5.4 Other Considerations For a low-end ZFS server, any cheap consumer motherboard will do; however, for mid-range or high-end builds, a server motherboard should be used for its ECC memory and buffered DIMM support. See section 10 for more info.
5.4.2 CPU

6.1 Installation
6.1.1 Debian 7
6.1.2 Ubuntu 12.04
6.1.3 CentOS 6.x
6.1.4 Solaris
1. do nothing
2. ZFS is native
3. ?????
4. profit

6.2 Basic Setup
6.2.2 Adding a ZIL and L2ARC
6.2.3 Mounting and Unmounting the Pool

7 The Zpool
7.1.2 Compression
7.1.3 Deduplication
7.2 Snapshots
7.3 Scrubbing
7.4 Resilvering
7.5 Pool Monitoring
7.6 Expanding a Zpool
7.7 ZFS RAID Levels

8 Performance Analysis

9.1 ZFS Administration Best Practices
9.2 ZFS Primer
9.3 ZOL Website
9.4 Anandtech SSD Reviews
9.5 ZFSonLinux Setup Guide

10 Sample Setups
CPU: Pentium G3220 - $60
Mobo: ASUS H81M-K mATX board - $60
Memory: 4GB of cheap DDR3 - $40
Storage: 2x 3TB WD Reds - $260
Boot drive: 60GB Kingston SSDNow V300 SSD - $70
Case: NZXT Source 210 - $40
PSU: Corsair CX 430 - $45
Throw the 2x3TB drives in a mirror and you've got yourself a nice, low-power box with 3TB of available storage.
10.2 Lower-Mid range: $1200, 9TB
10.5 Upper-Mid range: $2800, 24TB
- 20 replies
-
- filesystem
- storage
-
(and 5 more)
Tagged with: