Search the Community
Showing results for tags 'jbod'.
-
Currently I'm using an E-ATX computer (huge case) with a dual-CPU motherboard for my Plex server. I want to transition to something more compact, maybe a NUC, while maintaining the ability to connect up to 6-8 drives and continue using my zpool. I also use a Raspberry Pi for my Home Assistant setup, and I'm considering merging it into this new setup for efficiency. Here are the details of my current setup: E-ATX computer with dual-CPU motherboard (E5640), 16GB RAM, ZFS support for storage, Raspberry Pi for Home Assistant; ideally with the possibility to add a GPU for future encoding needs. I'd appreciate your guidance on the following: How to connect up to 6-8 drives to a NUC. Advice on migrating the zpool to the NUC while preserving data. What kind of NUC would be on par with the performance of the previous setup, or much better? Or maybe this is all wrong and I shouldn't go with a NUC at all? I'm open to alternative recommendations, including options that could be rack-mounted - maybe something compact-rack sized. Thank you in advance for your assistance!
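For what it's worth, the zpool migration itself is usually the easy part, since ZFS pools are portable between machines: export on the old box, move the disks, import on the new one. A minimal sketch, assuming a pool named `tank` (the name is hypothetical - substitute your own):

```shell
# On the old server: cleanly detach the pool (unmounts datasets and
# marks the pool exported so another host can import it).
zpool export tank

# Physically move the drives to the new machine, then:
zpool import          # with no argument, lists pools found on attached disks
zpool import tank     # imports by name; add -f only if it wasn't exported cleanly

# Sanity-check the pool and data after the move.
zpool status tank
```

As long as the new machine can see all the member disks (e.g. through an HBA or a SAS enclosure), the pool comes up with data intact.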
-
LTT'ers I need help! I am a complete noob when it comes to anything related to mass storage (outside of using 4 & 8-bay external enclosures and basic RAID configs), and my use case for mass storage is increasing at a rapid rate. I am currently weighing options, and it seems that for my use case a JBOD DAS will work. But this is not totally clear to me, and if I need to switch to a better solution I can do so. I really just need to figure out what direction to execute on. I found these, which set me on that path: https://www.servethehome.com/sas-expanders-diy-cheap-low-cost-jbod-enclosures-raid/ https://forums.serverbuilds.net/t/guide-direct-attached-storage-das-add-up-to-16-3-5-drive-bays-to-an-existing-server-for-less-than-300/136 My workflow is as follows: I am writing data to 3.5" HDDs (4TB-12TB capacities) in a 2-bay hot-swap dock, 4TB per day, on 3 different systems (12TB total). Once the writing process is complete I do not need to read or write from that disk, but I do need to see it and have it connected to a single system; I do not need network storage of any kind. Which is what leads me to think that a JBOD DAS is the best bet, connected to one of my systems via an HBA? In the near future I will look into redundancy and higher-capacity drives. However, my short-term (2-3 month) focus is just getting the best value in terms of $/TB and getting as many drives connected as cheaply, but effectively and safely, as possible. This is new to me, but my workflow just drastically changed and I am trying to learn and implement as fast as possible. Between juggling this and other areas of growth, I am a bit lost on the data storage side. For my use case I require the following: 1. A 15-bay enclosure at a minimum; 4U is fine, ideally under 650mm long. 2. Ability to scale: is there an effective way to daisy-chain? I.e. if I add 3x more 15-bay solutions, is there a possibility to daisy-chain them together? 3. If I go DAS and connect via HBA, how does this limit my PCIe lane usage on a system? And would connecting it to a B450i/R3 2200G or Z490/i7-10700k work, or is a prosumer platform the only solution? If more info would help, please let me know. Like I said, I am lost in the sauce on this one and need guidance. Thanks (a ton) in advance!
-
Hi all, I have an Orico external enclosure that I've been using for a couple of months. I have a 4TB Toshiba drive installed and I'm receiving another 4TB drive from them soon. I'm planning on using JBOD, as the first disk already has data on it. The enclosure has settings for RAID and JBOD. I haven't changed it yet, but I'll need to do it now. How exactly do I set up JBOD? Do I just turn it on, then wipe the second disk and format it to NTFS? Or do I need to configure something within Disk Management? Thanks for the assistance.
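Roughly, yes: in JBOD mode the enclosure presents each disk separately, and the new, blank disk then has to be initialized and formatted from Windows. A hedged sketch of the Disk Management side using `diskpart` - the disk number `2` is an assumption, always confirm it with `list disk` first so you don't wipe the wrong drive:

```shell
diskpart
list disk                            # identify the NEW, empty disk by size
select disk 2                        # example number -- verify before proceeding
clean                                # erases the partition table (new disk only!)
convert gpt                          # GPT partition style for a modern data disk
create partition primary
format fs=ntfs quick label="Data2"   # quick NTFS format
assign                               # gives it the next free drive letter
exit
```

The existing disk with data on it should not need any of this - it just shows up as-is once the enclosure is in JBOD mode.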
-
Hi, is there a way to switch an LSI 3ware 9750-8i to JBOD? I was under the impression that it's possible, yet 'N/A' is the only option shown for it in the BIOS.
-
Hi all, this isn't my priority right now (waiting to move into the house first and get other things set up), but I'm wanting to sort out a home server with quite a lot of storage - likely over 100TB; it will get to this level in the next year and increase over time. The main uses for this will be the below. I don't think we're ever going to be hitting it incredibly hard. Mass storage for the family's work and leisure files; try to keep local files to just programs for the most part. Plex: we live in China and due to censorship, almost everything we watch is via VPN or downloaded. VPNs really aren't always stable here, so they're typically just used as a backup. 4 people intermittently accessing files; 4x 4K playback may be likely in a few years, 2x 4K more likely for now. My main requirements are: Can be audible, but needs to be a low hum during typical usage. My existing router and switch measure ~50 dB; I'd like the servers to be the same or lower. Part of buying a larger rack was to be able to cool components without needing server fans to blast air in a tight space. It can be a few U tall; no need to purposely make it compact. Energy efficient: I would rather invest in something efficient and recoup costs over time. Redundant PSUs would be nice, at least for whatever system has the storage. When moving in I'll have the below - listing the main things so you can see what's available in case it influences the decision: 42U rack (thank god my wife also appreciates tech), mostly bought for expandability moving forwards. I haven't decided on a specific one yet; the one I'm looking at has a mesh door. I'm not sure if that's something I should go for instead of glass, considering the noise factor, but overall it seems like a very reasonable price from a reputable brand here. Router: UDM-Pro. Switch: USW-Pro-48-PoE. HASS server: NUC. Not sure if this can/should also run the Plex server, as I think it can be installed?
Kind of wanting to just keep this dedicated to smart-home stuff for now. Raspberry Pis: these may not be there at the start - probably 3-4 total, and I'm looking to 3D print a 1 or 2U housing for them. Again, I could look at getting one for use as the Plex server if it can handle the streams. 1 UPS to start, a 2nd after a while and seeing how long things last on 1. I will need to configure the UPS to turn off everything in the rack apart from HASS, the router and the switch. Considering the above, what do you think would be the best solution? If I go for a dedicated server, I'm thinking I'll have far too much headroom that I'll never use and waste money on, as well as incurring higher energy costs. Though I'm still a bit lost regarding Plex usage, as people report so many different things regarding concurrent streams and how intense it is, especially with high bitrate. If I go for a JBOD with a Pi, NUC or SFF as the server, I may give up certain functionality such as ECC and redundant PSUs, which would be nice. Let me know your thoughts and if you need more info to make a better assessment! Many thanks for any help!
-
Title is pretty much exactly that. I recently purchased an LSI 9207-8e, only to find out that it was improperly set up for use in a JBOD system, plus it has OLD firmware that I'd rather upgrade. I made my EFI boot medium, put the firmware and the BIOS on the USB, wiped the firmware of the card and tried to install the new Broadcom P20 firmware, only to be greeted by an error: "NVDATA image does not match the controller revision." And it fails, endlessly. I've worked out that's because HP are asshats that can't be friendly to anyone and choose violence on a daily basis. Turns out that an HP-rebranded card, even if it's the same physical hardware, will not accept anything that isn't an HP firmware update, because reasons. Here's the current situation: 1. I have an HBA without firmware that's ready to accept firmware, no other problems. 2. sas2flash will not flash Broadcom firmware to HP-branded cards with release P15 or above. 3. sas2flash will only ignore that it's an HP card and push the firmware anyway if it's the P14 release or earlier. 4. Broadcom never made a P14 release for UEFI, as far as I can tell. 5. None of my hardware at home supports the BIOS32 Service Directory (everything is UEFI), and thus sas2flash will not work on MS-DOS or FreeDOS; instead it gives me a PAL error. 6. I can't flash it via Windows without at least *some* firmware being on the card. 7. HPE's website is a labyrinth that doesn't appear to offer a full firmware file for me to flash. Essentially: I can flash it via DOS, but I can't actually run the commands because my hardware isn't old enough. I can't flash it via UEFI, since Broadcom didn't release an old enough version. I don't know what I'm doing wrong with Linux, because nobody documents shit. Maybe HP has a firmware, but I can't find it. Anyone got an idea on this? The Linux installer for this one just... doesn't work on Ubuntu 12.04 to 20.04. I'm defo doing something wrong, but this is extremely poorly documented and I'm at a loss.
Help appreciated.
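For reference, the usual (non-rebranded) crossflash sequence from the UEFI shell looks roughly like the below - the firmware/BIOS filenames are examples for a 9207/IT-mode card, and step 3 is exactly where the NVDATA revision check described above rejects HP-branded boards:

```shell
# From the UEFI shell on the boot USB (filenames are examples):
sas2flash.efi -listall                        # 1. confirm the controller is visible
sas2flash.efi -o -e 6                         # 2. erase flash regions (already done here)
sas2flash.efi -o -f 2118it.bin -b mptsas2.rom # 3. write IT firmware + option ROM
sas2flash.efi -listall                        # 4. verify firmware/NVDATA versions took
```

On a card whose flash has already been erased, step 3 is the only thing standing between you and a working HBA, which is what makes the revision check so frustrating.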
-
I got an old server from work for free that has some RAID cards in it, but when I slot in my 8TB drive it sees it as 931GB. I've updated the firmware after MANY issues, and the problem is the same. I've heard about flashing it into IT mode, but I don't want to go through all of that if it's not going to resolve the issue. Does anyone know if that would allow it to detect the drive correctly, or is it a waste of time? I don't really need RAID, just a JBOD. If I'm out of luck, what are some other RAID cards in a good price range that would work with those 8TB drives?
-
Hi guys, I've been trying to get this to work the entire day. On the Lenovo System x3650 M5 with the ServeRAID M5210 controller, it does not seem possible to convert hot-plugged disks to JBOD without rebooting and doing it in the BIOS. I can only convert existing JBODs to Unconfigured Good, which is not great for replacing dead drives in a Proxmox ZFS pool... Does anyone have experience with this particular hardware and can help me out here? These are the steps I've taken so far: - Disconnected the battery and cache from the RAID controller, since JBOD mode can't be enabled otherwise - Tested different drives Here are some screenshots of the IMM:
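In case it helps anyone comparing notes: on MegaRAID-based controllers like the M5210, the online (no-reboot) conversion is normally done with Broadcom's `storcli` tool rather than the IMM. A sketch, assuming controller 0 and a drive at enclosure 252, slot 4 (both example IDs - check `storcli /c0 show` for yours):

```shell
# Check whether the controller currently has JBOD mode enabled at all.
storcli /c0 show jbod

# Enable JBOD mode on the controller -- this is the step the M5210
# blocks while the battery/cache module is installed, per the post above.
storcli /c0 set jbod=on

# Convert one specific Unconfigured Good drive to JBOD without rebooting.
storcli /c0 /e252 /s4 set jbod
```

Whether the hot-plugged drive lands as Unconfigured Good (convertible online) or requires a BIOS round-trip seems to be the crux of the problem on this particular firmware.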
-
Hi, 1. I have now migrated 4x 3TB disks (JBOD) from a 412+ to a 1815+, and I have 4 new 3TB disks. Can I add the new disks to a new volume, make an SHR RAID 5, then move all files from volume 1 to volume 2? Can I then delete volume 1, move the 4 new disks to slots 1-4 and the old disks to slots 5-8, and extend my SHR RAID 5 with them? Will volume 2 change to volume 1 if I delete the old volume 1? 2. Or can I do this? Put a new disk in, make a new volume, fill it with files from the JBOD disks, then put another disk in, make a new volume again, put more files on it, and repeat until I have a backup of all the files. Then make the SHR RAID 5 volume and restore my system configuration backup. Then put the backup disks in one after another, copy the files over, then remove the backup volumes and use those disks to make the SHR RAID 5 bigger. Regards, Johan
-
I'm looking for a Just-a-Bunch-Of-Disks Direct Attached Storage chassis with the following criteria: minimum 12 hot-swap 3.5" drive bays rackmount 2U or 3U max depth 500mm redundant psu would be nice under £1000 available easily in the UK Can you help me?
-
Hi everyone, I'm looking into building a new server to host the things that keep my main PC on 24/7. Mainly plex, a webserver and some game servers hosted for friends. My plex library is really the biggest hurdle, as currently I just split it across a few hard drives in my gaming rig, and it is entirely susceptible to drive failure. If possible, I would like to find a way to have some fault-tolerance (with the ability to recover from a failed drive by replacing the drive) in a way that I can continue to expand the storage, thus negating the need to go out and spend $2000 on hard drives all at once to build a raid in a vain effort to future proof my storage capacity while providing fault tolerance. Is there anything (ideally on Windows/Windows server, but linux would be fine too) that would do what I'm looking for? Also, yes, I know, backups. I'm not having full backups for an 8TB+ media library. Thanks in advance for any insights you might have!
-
So I have the following drives installed in my PC: Plextor PX-128S3G, Kingston SUV400S37120G, HGST HTS541010A9E680, WDC WD10EZEX-21WN4A0. The first is an M.2 SATA SSD with 128GB of space, the second is a 2.5" SATA SSD of 120GB. The third is a 1TB laptop drive I salvaged from my laptop, and the fourth is a 1TB WD Blue HDD. What would be the optimal use of these for my Windows install? I have all important data backed up already and am ready for a clean install. The problem with building a bootable RAID 0 with the two SSDs is that they use different controllers, and RAIDXpert2 does not let me create an array for them. Also, I cannot boot into the legacy manager by pressing Ctrl+R or Ctrl+I at POST. I was thinking about a single-drive install and then striping data among the drives, but it would be useless, as the SSD would be held back by the HDD. Moreover, RAID is mostly not preferred due to software problems and increased boot times due to recalculation of array parity. I can install my OS on one drive and use symlinks for all the software I need, but that renders one SSD redundant. I'd really appreciate it if I had just one volume, but I'm willing to compromise at up to two drives (one for the OS and the other for applications). I use some programs that need to be installed on the C drive (like VS2017, iTunes etc.). A JBOD could be used, but I am still limited by the controller on my motherboard. I'd prefer not to have to buy a RAID controller, as I am out of PCIe slots. I did have a license for Enmotus FuzeDrive for AMD, but I had to refund it because SMART monitoring tools would cause Windows to crash, as the virtual AHCI controller did not play well with software like HWiNFO64. I have attached the driver controller information if anyone requires the data. TL;DR: Bought a bunch of storage, have jack idea how to use it. Driver Controller Information.txt
-
Hello there, I have a QNAP TS-451, currently with 3 fresh 3TB HGST drives. I wanted to set it up and I'm having trouble deciding, or even understanding, what the best setup for me would be. I'm a video editor and need the QNAP as a good backup for running projects. So: 1. Since it's a backup, should I use any kind of RAID? (I don't mind buying another drive for something like a RAID 10, for instance.) 2. If I want it to be non-RAID, how do I set it up? When I go to the disk setup I only get either a RAID option or JBOD, which to me seems like a bad choice, since one drive failure means anything from a big mess to complete data loss. I don't see an option to treat the drives separately. Any other thoughts? Thanks, Ariel
-
Hi, I recently purchased an HP MicroServer N36L for use as a basic small file server. It currently has 3GB of RAM (ECC). What would be the best way to configure the server for this? I have read up on FreeNAS, but it appears to need more RAM and (for best practice) an SSD for the random writes (to avoid flaky USB drives), which in turn would mean I would need to buy a JBOD card due to the server's lack of ports. Is this the best setup, or am I missing some secret gold mine somewhere? NAS4Free and Server 2012 R2 are looking like promising candidates without needing expensive devices such as the IBM M1015 cards etc. So what does everyone recommend?
-
Thank you for reading this message and helping me. Context: I recently had a NAS (DNS-320L) that was faulty, and I sent it back. It was a 2-bay NAS, and I had a WD Green 1TB drive and a WD Red 4TB drive. I had therefore set up a 1TB RAID 1, and a 3TB JBOD on the remaining space of the WD Red. I returned the DNS-320L, but I want to recover my files. Since nothing actually went wrong with the hard drives or the RAID, I bought a USB hard-drive dock, connected the WD Red drive, and copied all the files on the RAID 1 to my computer. However, that is all that shows up - the JBOD files do not appear. Problem: I installed "DiskInternals Linux Reader" and was able to see what was going on (as seen in the images attached). It seems as if the 1TB Green drive thinks it has the JBOD on it, and when I open a folder view of "HDb2" I see the folders of what I put in the JBOD. However, I am guessing that the actual data is on the 4TB Red drive. So now I am wondering: do I need to have both drives connected simultaneously (by buying another USB hard-drive dock) in order to copy the JBOD files to my computer, or will this not work? Thank you for your help,
-
So I am planning on getting a WD My Book Duo 4TB. It is basically an enclosure that comes with two 2TB internal hard drives, and according to Western Digital they come configured in RAID 0, with options for RAID 1 and JBOD. After some research I noticed that JBOD seems to have two meanings that contradict each other. One is that JBOD presents all its physical drives as a single drive - so, using my product as an example, my computer would see it as one 4TB drive. The other is that JBOD, unlike RAID configurations, presents the disks as separate drives, so instead of my computer seeing one 4TB drive, it would see two 2TB drives. So I'm just wondering what's going on here. My Book Duo will contain two physical drives; on the first drive I would like to store my videos, and the second drive I would like to use as a backup for the videos on the first. I know having it in RAID 1 would do just that; however, I'm also worried about corrupted files, accidental deletes and viruses, so I would like to have my backup drive update every night to give me a chance to fix any data problems, as opposed to having a complete mirror, which wouldn't allow me to do such a thing. So I guess my other question is: how do I make sure my JBOD is configured as two separate physical drives? Edit: Sorry for the double post, idk how that happened
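The nightly-update idea above (instead of a live RAID 1 mirror) can be done with a scheduled one-way copy job. A sketch using Windows' built-in `robocopy`, with the drive letters `E:` (primary) and `F:` (backup) as assumptions:

```shell
:: Nightly one-way sync from the primary volume to the backup volume.
:: /E  copies all subfolders (including empty ones).
:: /XO skips files whose backup copy is already newer.
:: Crucially there is NO /MIR flag, so deletions or corruption on E:
:: are NOT propagated to F: automatically -- you get a chance to review.
robocopy E:\Videos F:\Videos /E /XO /R:2 /W:5 /LOG:C:\logs\nightly-sync.log
```

Run it from Task Scheduler each night. The trade-off versus RAID 1 is exactly the one described in the post: up to a day of changes is at risk, but an accidental delete or a virus has a one-day window in which it can be caught before it reaches the backup.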
-
So my Synology DS215+ had a JBOD array failure. I would be fine if I had been able to back up my data in time. Let's start from the beginning: I ordered a Synology DS215+ and 2x Crucial 240GB SSDs. I put them in a JBOD array and had it running for a few weeks with some VMs from a Proxmox server. I finally decided it was time to transfer my Windows user files to the NAS. All went well and I was up and running. A few minutes later, I heard beeping coming from the NAS: the second drive of the array had failed. What I'm wondering is: since I only had about 170GB on the NAS, are all my files on the primary, still-running drive? Or did I lose half of my files because it put half of them on one drive and half on the other? The NAS failed before I could do any backups. Thanks, Kyle
-
Hi! I am wondering what the best NAS configuration for my use cases might be. Looking forward to your feedback. First, my network setup: Now to my current configuration: PC: 1x SSD: system drive with OS and programs. 1x HDD: 500GB (rather old drive, 7,200 RPM) for documents, pictures, videos, music etc; it's an archive. At the moment 350GB of that 500GB drive is used. NAS: a rather old Synology DS209j from 2009. 1x HDD: 250GB drive (an even older former desktop drive, 7,200 RPM). 1x HDD: 250GB drive (same model; recently crashed) - NOT USABLE ANYMORE. They were configured as normal drives with 2 separate partitions (no RAID or JBOD), so almost the full 500GB was available. I run a weekly echo backup, meaning that all changes made on the archive drive of my PC are echoed to the NAS. This way, the crashed NAS drive was also no problem regarding data loss, because all the data is still on the original drive in my PC. Since one NAS drive crashed, I have to change something in order to have a backup solution again. I don't expect the demand for archive space to explode in the near future - still, scalability for a reasonable price would be a bonus. I am thinking one of the most cost-effective and still properly working solutions would be to get a 1TB drive for my PC as a new archive drive and to put the old 500GB into the NAS. For the 1TB drive I would choose the WD Green with max. 5,400 RPM. I have read good things about it, and top speed is nice but not crucial. I assume it will be faster than my old 500GB drive anyway, since it's a SATA 6Gb/s drive and the old one is not. The NAS would then have: 1x HDD 250GB, 1x HDD 500GB. Configured in JBOD it would be one partition (a bit easier to handle than 2 partitions). I wouldn't choose RAID 0 because I think my old NAS would be too slow to handle it. -> TOTAL of 750GB of backup space (that's 250GB less than I might use up in my PC - that's one of the drawbacks). I am really curious what you think.
Greetings and a happy new year from Austria, Europe
-
Having had a beast build in mind for a few months, now it comes down to the actual planning and decision-making part. My general thinking, storage-wise, has been to use M.2/PCIe (though I suppose NVMe is a thing now) for the boot drive, 3 or 4 SSDs in RAID 5/10 for the "working" disk space, and then fairly massive bulk media storage - 6-8 3TB WD Reds in RAID... but that's where I start to lose whatever certainty I've had about how I want (or need) to go about it. I guess what it boils down to is: I'm going to need a controller card [edit: for the HDDs], but whether I just need SATA expansion in JBOD mode for software RAID, or whether I should go with a full-blown RAID card... I just don't know. Additionally, this being my first time in the market for such a card, I don't quite "get" them - specifically, the connectivity between the card and the drives. I see things like 2-port SAS/SATA cards out there, but I gather that the number of ports isn't actually the number of drives one can connect to/through these cards...? Or am I looking at investing in an 8-port controller card? I've tried reading manuals and specs online, but... MEGO. Suggestions/recommendations would be most welcome.
-
I have 4 of these: Rackable 16-bay SE3016 JBODs. Atm I'm only using 1 of them, with 16x 4TB WD Reds in it. I have replaced the jet-engine Delta fans (2 case and 1 PSU) with 3x 120mm Cougar CF-V12HP fans. What I would like to ask: I am about to connect another 1 or 2 JBODs to my server, and I am looking for the best fans to replace the original Deltas. Like the topic says, I would like them to be as quiet as possible but still effective - the HDDs these days are fickle b***es, no need to cook them as well. I was thinking maybe Noctua fans? Please recommend. I do not care how the fans look. Also, I'm not rich, but rather too poor to buy cheap stuff, so quality > price. If you need any other info, please lemme know. Thank you in advance. Edit: added some pictures of the JBODs' back, where the fans would go... This is the back of the case with 2 Deltas, which are seriously loud, even though the loudest is the small 40mm Delta in the PSU, which runs at something like 13,000 RPM, iirc. It's a f***in jumbo jet. This is the view from behind: I have taken the back panel off and use a makeshift back panel with 3x 120mm Cougars, like I mentioned before. These are the backplanes, which obstruct most of the airflow, imho. That's probably why I need to think about the fans' static pressure rating. Although, methinks this JBOD handles airflow quite a bit better than Norco cases - at least better than the Norco 4220 I used a couple of years ago. These have more open drive cages at the front.
-
Hey internet! In the near future I'm looking at building a hackintosh* computer just for general everyday use, for various reasons - one of which being that my gaming PC turns my very small room into a sauna during the summer, thanks to its dual AMD GPUs (seriously, who needs a heater with those things?). I would like to go 100% SSD with the system, but since I'd rather not spend 50% of my budget just on storage to get 512GB-1TB straight off the bat, I was thinking of doing a JBOD setup with 256GB SSDs. This would be fine if I were buying the SSDs straight up, but the thing is... I'm not. So the question at hand is: after I install the OS onto the first SSD and buy another one, say, 3 months later, is it possible to add the second SSD to the array without having to rebuild it/reinstall hackintosh? Personally, I've never done RAID before (apart from my gaming PC, which uses 1TB drives in RAID 10 - a straightforward one-time set-up-and-forget so far), so getting into more complex tasks such as adding to an existing array, or just RAID on OS X in general, is an entirely new world to me. Thanks in advance for all your help! *Plz dun h8 mi
-
So I'm planning a new build for myself, and I already have some of the parts I plan on using in my current build/a special box under my bed. In my PC is a 120GB SSD I'm running Win 7 off of; under my bed is a 64GB SSD. What I want to do is set up the 2 drives in my new build so that the PC reads them as a single 184GB drive. I believe this can only be done with JBOD, because with RAID 0 the capacity would be limited by the smaller drive. I am also under the impression that the mobo needs to be compatible with JBOD. I am looking at the ASUS Z87-Pro or the MSI GD65 boards for my new build. So basically this is a 2-part question: 1) Do these boards even support JBOD, or an alternative means of getting the end result? 2) Which board would you get if you were making a new build?
-
So, I searched the forums and couldn't really find anything. I have 2x 16-bay external JBOD enclosures, so I bought an Adaptec RAID card that supports 128 disks. What I didn't realize was that it only allows 24 volumes, which means I can only have 24 RAID volumes or non-RAIDed disks - so it doesn't work very well as a JBOD card. I did some further research and found that I need a card that supports JBOD pass-through. I found a lot of recommendations on other forums, but they were all many years old. Does anyone know of a RAID card that uses SFF-8088 or SFF-8087 mini-SAS connections and supports more than 32 disks? It needs to have at least 2 ports; internal or external doesn't matter. Really, I am just looking for recommendations on an easy-to-use brand and/or model. Thanks in advance.
-
Hey guys, can you please recommend a cheap SATA controller card that I can use for JBOD (with WD Reds) and run ZFS on? Edit: I will need at least 4 SATA ports. <$100 NZD would be great. You can use http://pricespy.co.nz to see the cheapest NZ prices for parts. I don't mind buying online from an international retailer as long as shipping does not cost an arm or a leg. Thanks
-
Hi all, I'm buying a new 1TB hard drive as my current one is almost full, but when I install the new one I would like to just merge the two so Windows sees them as one - simple as that. I'm not interested in any RAID or anything; there's nothing important enough on my current drive to care about any protection. With that in mind, I was researching the best way to let Windows see the two drives as one, and it seems JBOD is the way to go, but I have no idea how to set it up. So what would be the easiest way to go about this? Do I need any additional software, or can I do it straight from Windows? Obviously I would like to spend no money while doing this, so a way to do it for free would be good. Thanks, Paul.
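What's being described here is what Windows calls a "spanned volume", and it needs no extra software or JBOD support from the controller - it's built into Disk Management. A hedged `diskpart` sketch, assuming the existing data drive is disk 1 and the new empty drive is disk 2 (verify with `list disk`; note that spanning converts the disks to dynamic, the new disk's contents are wiped, and the spanned volume is lost entirely if either drive dies):

```shell
diskpart
list disk            # confirm which number is the NEW, empty drive
select disk 2
convert dynamic      # new drive becomes a dynamic disk (its data is erased)
select disk 1
convert dynamic      # existing data drive too -- its data IS preserved
list volume
select volume 3      # the volume on the existing drive; number is an example
extend disk=2        # spans that volume onto the new drive
exit
```

The same thing can be done by right-clicking the volume in Disk Management and choosing "Extend Volume" onto the second disk. It cannot be done to the system/boot volume, but for a plain data drive like this it works fine and costs nothing.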