Search the Community
Showing results for tags 'mdadm'.
-
Hi, so yesterday I added a drive to my RAID array. I had a 3-disk RAID 5 array and added a fourth, but now I'm stuck on where to go from here. I started by growing the array, which took around 16 hours, and now I'm wondering how I grow the partition I'm using for data without deleting anything on it. lsblk gives me this for my RAID array:

sda           8:0    0  3.7T  0 disk
└─md0         9:0    0 10.9T  0 raid5
  └─md0p1   259:1    0  7.3T  0 part  /mnt/media
sdb           8:16   0  3.7T  0 disk
└─md0         9:0    0 10.9T  0 raid5
  └─md0p1   259:1    0  7.3T  0 part  /mnt/media
sdc           8:32   0  3.7T  0 disk
└─sdc1        8:33   0  3.7T  0 part
  └─md0       9:0    0 10.9T  0 raid5
    └─md0p1 259:1    0  7.3T  0 part  /mnt/media
sdd           8:48   0  3.7T  0 disk
└─sdd1        8:49   0  3.7T  0 part
  └─md0       9:0    0 10.9T  0 raid5
    └─md0p1 259:1    0  7.3T  0 part  /mnt/media

I tried to use resize2fs, but it tells me it doesn't need to do anything since the filesystem is already using the maximum number of blocks.
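The lsblk output shows why resize2fs reports nothing to do: the array (/dev/md0) is now 10.9T, but the partition inside it (md0p1) is still 7.3T, and resize2fs only grows a filesystem up to its partition's edge. A dry-run sketch of the usual order of operations, assuming the data partition is md0p1 with ext4 on it (the `run` wrapper only prints the commands; remove the echo to execute them for real, and back up first):

```shell
# Dry-run sketch (prints only): grow the partition, re-read the partition
# table, then grow the filesystem into the new space.
run() { echo "+ $*"; }

run parted /dev/md0 resizepart 1 100%   # extend md0p1 to the end of the array
run partprobe /dev/md0                  # make the kernel re-read the table
run resize2fs /dev/md0p1                # grow ext4 into the new space (works online)
```

resize2fs can grow a mounted ext4 filesystem online; if the filesystem were XFS instead, the last step would be xfs_growfs on the mount point.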
-
I would like to know which is best for performance: an mdadm mirror or a ZFS mirror. There will be 2 drive pools, each containing 2 matching disks. With a single drive on its own I can easily get ~100MB/s (reads) over the network. I set up a FreeNAS ZFS pool, not really knowing what I was up against, and got 60MB/s (reads) over the network. I was hoping for better performance.

Scenario:
2x 4TB Seagate NAS drives
2x 2TB WD Reds
Intel G1830 (dual core @ 2.80GHz)
8GB of RAM
1Gb Ethernet link (might be upgraded if there is a need)

The drives are on the Intel chipset SATA ports, but I might buy an LSI-based HBA and flash it. Thanks for your help!
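Before attributing the 60MB/s to ZFS itself, it helps to measure local sequential throughput on each pool, taking the 1Gb link and the file-sharing layer out of the picture. A minimal sketch (the mount point is an assumption; point POOL at the pool under test, and note the read pass may be served from the page cache unless the file is much larger than RAM):

```shell
# Minimal local write/read throughput check on a pool mounted at $POOL.
# /tmp is only a safe default for demonstration; set POOL to the real mount.
POOL="${POOL:-/tmp}"
TESTFILE="$POOL/throughput.test"

dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1  # write pass
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n1                          # read pass
rm -f "$TESTFILE"
```

If the pool reads fast locally but slowly over the network, the bottleneck is the share or the NIC rather than mdadm vs ZFS.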
-
As ZFS is not workable for what I want, I looked into mdadm + LVM. It seems to all work out, but I would like to get external advice/confirmation.

The situation:
- now using 3x4TB RAID 5 with mdadm, filled at 5.5TB, forecast to grow shortly and put pressure to extend
- extra disks available: 1x1TB, 1x2TB, 2x3TB
- the NAS (Ubuntu 16.10, CPU: i3-2120T, 16GB RAM) is used in a home environment and we have limited computing needs; we mostly want enough space to have all media in one place for the Plex server (running on the same machine), and just enough power to manage that and the couple of shared files besides. CrashPlan is running as well.
- 4 SATA ports on the motherboard (used up by the 3x4TB and the SSD for the OS), and a PCIe x4 to 4x SATA 3.0 card on its way to add more SATA ports

The plan:
- (4TB + 4TB + 4TB) RAID5 => /dev/md0 (already in place)
- (1TB + 2TB) RAID0 => /dev/md1 (the 2TB disk would be split into two 1TB partitions to allow the RAID0)
- (3TB + 3TB + /dev/md1) RAID5 => /dev/md2
- then create an LVM pool on /dev/md2
- copy all data from /dev/md0 to /dev/md2 (I can move a couple hundred GB to the laptop or an external HDD if /dev/md2 is too small)
- format /dev/md0
- add /dev/md0 to the LVM group
- enjoy my 14TB LVM + RAID5 array (give or take some overhead)

Now come the questions:
- Is that possible at all?
- What are the risks of such an implementation? I see both /dev/md0 and /dev/md2 being one-disk fault tolerant, with /dev/md1 being just a cheap way to avoid buying a new disk, so I do not see much of an increased risk compared with the existing situation (except that I would now have two arrays).
- Any specific care needed for the migration process?
- Performance-wise, what is the expected overhead, if any, compared to my existing 3x4TB RAID5 (mdadm) setup?
- How difficult would it be to replace /dev/md1 with a physical 3TB disk later down the road?
- How portable is that storage pool? (I will surely replace the motherboard+CPU+RAM sooner or later when I go for a 4K TV.) Will it be possible to move all those disks to a new system?
- Concerning expansion: if I just add a 4TB drive to /dev/md0, will LVM handle this properly and give me the extra 4TB in the pool? My plan is rather to either replace one of the arrays with one made of bigger disks, or to add another 3-disk array to the pool (physical space will then be the concern).

Thanks already
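The layering described above can be sketched as commands. This is a dry-run (the `run` wrapper only prints): the /dev/sd* names and the partition layout are placeholders, since the post doesn't give device names, and running mdadm --create for real destroys data on the member devices.

```shell
# Dry-run sketch (prints only) of the proposed md1/md2 + LVM layering.
run() { echo "+ $*"; }

# RAID0 over the 1TB disk plus the two 1TB partitions carved out of the
# 2TB disk (sde1, sdf1, sdf2 are assumed names; two members share one disk):
run mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sde1 /dev/sdf1 /dev/sdf2

# RAID5 over the two 3TB disks plus /dev/md1 (md devices can be members):
run mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdg1 /dev/sdh1 /dev/md1

# LVM pool on md2; md0 joins the same volume group after the data is copied:
run pvcreate /dev/md2
run vgcreate media_vg /dev/md2
run lvcreate -l 100%FREE -n media_lv media_vg
# ...copy data from /dev/md0, then:
run pvcreate /dev/md0
run vgextend media_vg /dev/md0
run lvextend -l +100%FREE /dev/media_vg/media_lv
```

Note that if any single member of md1 fails, md1 fails as a whole, which md2 then treats as one failed RAID5 member, so the one-disk fault tolerance reasoning in the post holds.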
-
Hi everyone! I have a home server running Ubuntu Server 20.04.1 LTS, mainly as a Samba share and some other stuff. For a great part of the Samba share I've been running a RAID 0 made of 3 old HDDs, coming to 1.8TB of storage, but recently I changed one of the drives by removing the array, formatting the drives, etc. and building a new one. I use this array for backups with AOMEI Backupper. The problem is that Backupper now gives me very frequent errors about not being able to write the file, and indeed, when I try to write anything to that share, Windows gives me a read-only error and I have to reset the server. Also, one time I tried to copy a big file to test this problem: Windows copied about 1% of it, then dropped to 0MB/s for about 5 minutes, returned to 15 or 20MB/s (normal is 120MB/s for my connection) for a few minutes, and dropped again. Very strange behaviour, and it only affected the specific share where the RAID was mounted. I made a skdump of the drives (attached below), and every one of them reports "Overall Status: BAD_SECTOR", so I ran sudo badblocks -v /dev/sdx > badsectorsx.txt for every single drive and got the results in 3 .txt files. But now I cannot perform an fsck or e2fsck on each drive because it does nothing, and trying to fsck the md device gives me this:

server@server:~$ sudo fsck -p /dev/md127
fsck from util-linux 2.34
/dev/md127: clean, 704/122077184 files, 164898792/488284800 blocks

What can I do to repair those bad sectors or to make the array work OK again? Also, I just noticed that the filesystem reports 700GB occupied when I barely have anything in those folders!
Here are the skdumps:

server@server:~$ sudo skdump /dev/sda
Device: sat16:/dev/sda
Type: 16 Byte SCSI ATA SAT Passthru
Size: 476940 MiB
Model: [WDC WD5000AAKX-00ERMA0]
Serial: [WD-WCC2EDE45999]
Firmware: [15.01H15]
SMART Available: yes
Quirks:
Awake: yes
SMART Disk Health Good: yes
Off-line Data Collection Status: [Off-line data collection activity was completed without error.]
Total Time To Complete Off-Line Data Collection: 8400 s
Self-Test Execution Status: [The previous self-test routine completed without error or no self-test has ever been run.]
Percent Self-Test Remaining: 0%
Conveyance Self-Test Available: yes
Short/Extended Self-Test Available: yes
Start Self-Test Available: yes
Abort Self-Test Available: yes
Short Self-Test Polling Time: 2 min
Extended Self-Test Polling Time: 85 min
Conveyance Self-Test Polling Time: 5 min
Bad Sectors: 10 sectors
Powered On: 2.9 years
Power Cycles: 4131
Average Powered On Per Power Cycle: 6.2 h
Temperature: 39.0 C
Attribute Parsing Verification: Good
Overall Status: BAD_SECTOR
ID#  Name                        Value Worst Thres Pretty          Raw            Type    Updates Good Good/Past
1    raw-read-error-rate         200   200   51    2               0x020000000000 prefail online  yes  yes
3    spin-up-time                184   140   21    1.8 s           0xff0600000000 prefail online  yes  yes
4    start-stop-count            96    96    0     4170            0x4a1000000000 old-age online  n/a  n/a
5    reallocated-sector-count    200   200   140   0 sectors       0x000000000000 prefail online  yes  yes
7    seek-error-rate             200   200   0     0               0x000000000000 old-age online  n/a  n/a
9    power-on-hours              65    65    0     2.9 years       0x366400000000 old-age online  n/a  n/a
10   spin-retry-count            100   100   0     0               0x000000000000 old-age online  n/a  n/a
11   calibration-retry-count     100   100   0     0               0x000000000000 old-age online  n/a  n/a
12   power-cycle-count           96    96    0     4131            0x231000000000 old-age online  n/a  n/a
192  power-off-retract-count     199   199   0     949             0xb50300000000 old-age online  n/a  n/a
193  load-cycle-count            199   199   0     3257            0xb90c00000000 old-age online  n/a  n/a
194  temperature-celsius-2       104   92    0     39.0 C          0x270000000000 old-age online  n/a  n/a
196  reallocated-event-count     200   200   0     0               0x000000000000 old-age online  n/a  n/a
197  current-pending-sector      200   200   0     10 sectors      0x0a0000000000 old-age online  n/a  n/a
198  offline-uncorrectable      200   200   0     0 sectors       0x000000000000 old-age offline n/a  n/a
199  udma-crc-error-count        200   200   0     0               0x000000000000 old-age online  n/a  n/a
200  multi-zone-error-rate       200   200   0     0               0x000000000000 old-age offline n/a  n/a

server@server:~$ sudo skdump /dev/sdb
Device: sat16:/dev/sdb
Type: 16 Byte SCSI ATA SAT Passthru
Size: 476940 MiB
Model: [TOSHIBA DT01ACA050]
Serial: [745GDZEAS]
Firmware: [MS1OA750]
SMART Available: yes
Quirks:
Awake: yes
SMART Disk Health Good: yes
Off-line Data Collection Status: [Off-line data collection activity was suspended by an interrupting command from host.]
Total Time To Complete Off-Line Data Collection: 3571 s
Self-Test Execution Status: [The previous self-test routine completed without error or no self-test has ever been run.]
Percent Self-Test Remaining: 0%
Conveyance Self-Test Available: no
Short/Extended Self-Test Available: yes
Start Self-Test Available: yes
Abort Self-Test Available: yes
Short Self-Test Polling Time: 1 min
Extended Self-Test Polling Time: 60 min
Conveyance Self-Test Polling Time: 0 min
Bad Sectors: 27 sectors
Powered On: 1.8 years
Power Cycles: 862
Average Powered On Per Power Cycle: 18.0 h
Temperature: 39.0 C
Attribute Parsing Verification: Good
Overall Status: BAD_SECTOR
ID#  Name                        Value Worst Thres Pretty          Raw            Type    Updates Good Good/Past
1    raw-read-error-rate         100   100   16    0               0x000000000000 prefail online  yes  yes
2    throughput-performance      142   142   54    n/a             0x470000000000 prefail offline yes  yes
3    spin-up-time                131   131   24    168 ms          0xa800b6000300 prefail online  yes  yes
4    start-stop-count            100   100   0     2317            0x0d0900000000 old-age online  n/a  n/a
5    reallocated-sector-count    100   100   5     27 sectors      0x1b0000000000 prefail online  yes  yes
7    seek-error-rate             100   100   67    0               0x000000000000 prefail online  yes  yes
8    seek-time-performance       115   115   20    n/a             0x220000000000 prefail offline yes  yes
9    power-on-hours              98    98    0     1.8 years       0x713c00000000 old-age online  n/a  n/a
10   spin-retry-count            100   100   60    0               0x000000000000 prefail online  yes  yes
12   power-cycle-count           100   100   0     862             0x5e0300000000 old-age online  n/a  n/a
192  power-off-retract-count     98    98    0     2448            0x900900000000 old-age online  n/a  n/a
193  load-cycle-count            98    98    0     2448            0x900900000000 old-age online  n/a  n/a
194  temperature-celsius-2       153   153   0     39.0 C          0x27000e002e00 old-age online  n/a  n/a
196  reallocated-event-count     100   100   0     35              0x230000000000 old-age online  n/a  n/a
197  current-pending-sector      100   100   0     0 sectors       0x000000000000 old-age online  n/a  n/a
198  offline-uncorrectable      100   100   0     0 sectors       0x000000000000 old-age offline n/a  n/a
199  udma-crc-error-count        200   200   0     0               0x000000000000 old-age online  n/a  n/a

server@server:~$ sudo skdump /dev/sdd
Device: sat16:/dev/sdd
Type: 16 Byte SCSI ATA SAT Passthru
Size: 953869 MiB
Model: [Hitachi HDS721010CLA332]
Serial: [JP2940HZ2B30JC]
Firmware: [JP4OA3GH]
SMART Available: yes
Quirks:
Awake: yes
SMART Disk Health Good: yes
Off-line Data Collection Status: [Off-line data collection activity was suspended by an interrupting command from host.]
Total Time To Complete Off-Line Data Collection: 9455 s
Self-Test Execution Status: [The previous self-test routine completed without error or no self-test has ever been run.]
Percent Self-Test Remaining: 0%
Conveyance Self-Test Available: no
Short/Extended Self-Test Available: yes
Start Self-Test Available: yes
Abort Self-Test Available: yes
Short Self-Test Polling Time: 2 min
Extended Self-Test Polling Time: 158 min
Conveyance Self-Test Polling Time: 0 min
Bad Sectors: 1310965 sectors
Powered On: 5.6 years
Power Cycles: 9861
Average Powered On Per Power Cycle: 5.0 h
Temperature: 43.0 C
Attribute Parsing Verification: Good
Overall Status: BAD_SECTOR
ID#  Name                        Value Worst Thres Pretty          Raw            Type    Updates Good Good/Past
1    raw-read-error-rate         100   85    16    0               0x000000000000 prefail online  yes  yes
2    throughput-performance      137   100   54    n/a             0x5a0000000000 prefail online  yes  yes
3    spin-up-time                137   100   24    250 ms          0xfa002e010600 prefail online  yes  yes
4    start-stop-count            97    97    0     14109           0x1d3700000000 old-age online  n/a  n/a
5    reallocated-sector-count    96    47    5     1310740 sectors 0x140014000000 prefail online  yes  yes
7    seek-error-rate             100   100   67    0               0x000000000000 prefail online  yes  yes
8    seek-time-performance       140   100   20    n/a             0x1e0000000000 prefail offline yes  yes
9    power-on-hours              93    93    0     5.6 years       0xf9bf00000000 old-age online  n/a  n/a
10   spin-retry-count            100   100   60    0               0x000000000000 prefail online  yes  yes
12   power-cycle-count           98    98    0     9861            0x852600000000 old-age online  n/a  n/a
183  runtime-bad-block-total     100   100   0     0               0x000000000000 old-age online  n/a  n/a
184  end-to-end-error            100   100   97    0               0x000000000000 prefail online  yes  yes
185  attribute-185               93    93    0     n/a             0xffff07000000 old-age online  n/a  n/a
187  reported-uncorrect          1     1     0     68350 sectors   0xfe0a01000000 old-age online  n/a  n/a
188  command-timeout             100   83    0     8752798774      0x361cb5090200 old-age online  n/a  n/a
189  high-fly-writes             100   100   0     0               0x000000000000 old-age online  n/a  n/a
190  airflow-temperature-celsius 57    43    0     43.0 C          0x2b002b2a0000 old-age online  n/a  n/a
192  power-off-retract-count     88    88    0     14487           0x973800000000 old-age online  n/a  n/a
193  load-cycle-count            88    88    0     14487           0x973800000000 old-age online  n/a  n/a
194  temperature-celsius-2       139   120   0     43.0 C          0x2b000e003900 old-age online  n/a  n/a
196  reallocated-event-count     100   100   0     20              0x140000000000 old-age online  n/a  n/a
197  current-pending-sector      97    46    0     225 sectors     0xe10000000000 old-age online  n/a  n/a
198  offline-uncorrectable      100   46    0     0 sectors       0x000000000000 old-age offline n/a  n/a
199  udma-crc-error-count        200   200   0     0               0x000000000000 old-age online  n/a  n/a
server@server:~$

Thank you all in advance!
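One clarifying point on the fsck result above: fsck checks filesystem metadata, not the physical media, so a "clean" result says nothing about bad sectors. A dry-run triage sketch (the `run` wrapper only prints), assuming the array is /dev/md127 and the members are sda, sdb and sdd as in the dumps:

```shell
# Dry-run sketch (prints only): surface media errors per drive, then scan
# the whole array.
run() { echo "+ $*"; }

for dev in /dev/sda /dev/sdb /dev/sdd; do
    run smartctl -t long "$dev"    # queue a SMART extended self-test per drive
done

# An md "check" reads every sector of the array. On redundant levels it can
# rewrite unreadable sectors from parity/mirror data; on RAID 0 there is no
# redundancy, so it can only surface the errors, not repair them:
run sh -c 'echo check > /sys/block/md127/md/sync_action'
run cat /sys/block/md127/md/mismatch_cnt
```

Given the dumps, /dev/sdd (1310740 reallocated sectors, 68350 reported uncorrectable) looks to be failing outright; with RAID 0 providing no redundancy, replacing that drive and rebuilding is likely more realistic than repairing sectors in place.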
-
Hello all, I am rebuilding my home server and setting it up with RAID 5. Previously, I was just adding new hard drives to the home folder using /etc/fstab. I am already very proficient with Ubuntu Server, but this would be my first time setting up a RAID array on it. I have read mixed opinions on unRaid, but I also noticed Linus trying it out in a couple of episodes. So basically my question is this: am I at any kind of disadvantage using mdadm vs unRaid? I realize they are two different RAID solutions. Please share your thoughts and/or experiences using one vs the other. Hardware: a 120GB SSD for the primary drive (or cache, if unRaid) and three 2TB WD Blue drives. Also, in terms of adding disks to the array later, I would assume unRaid would be easier. Am I wrong? I primarily use the server for Plex, a Usenet client, and a few other programs that run with the help of Mono. I also use SMB, and occasionally I use it for hosting CS:GO. Thanks in advance!
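For reference, the mdadm side of the question is only a few commands. A dry-run sketch (the `run` wrapper only prints) for a 3-disk RAID 5 like the one described, with placeholder device names for the three 2TB drives; doing this for real wipes the members:

```shell
# Dry-run sketch (prints only): create a 3-disk RAID 5, make a filesystem,
# persist the array config, and later grow by one disk.
run() { echo "+ $*"; }

run mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
run mkfs.ext4 /dev/md0
run sh -c 'mdadm --detail --scan >> /etc/mdadm/mdadm.conf'   # persist across reboots
run update-initramfs -u                                      # Ubuntu: rebuild initramfs

# Adding a fourth disk later is supported, though the reshape is slow:
run mdadm --add /dev/md0 /dev/sde1
run mdadm --grow /dev/md0 --raid-devices=4
```

So growing an mdadm array is possible, just slower and more hands-on than unRaid's add-a-disk model, and mdadm requires matched capacity usage across members where unRaid tolerates mixed sizes.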