guru.gti

RAID on Ubuntu with Asrock b350 Pro

Recommended Posts

Posted (edited) · Original Poster

I am planning to use VMware Workstation for a small home lab. The idea was to have a RAID 0 (stripe) as the datastore.

 

I have managed to dual boot Windows 10 and Ubuntu. The Windows install detected the RAID after a lot of struggle (I found the drivers on the AMD web page):


https://www.amd.com/en/support/chipsets/amd-socket-am4/b350  

 

The other thing that helped get the RAID drive running on Windows was the RAID driver floppy image available on the ASRock web page:

 

https://www.asrock.com/mb/AMD/AB350 Pro4/#Download  

 

System specs:

 

1) ASRock AB350 Pro4

2) Ubuntu 18.04.2 LTS

3) 64 GB RAM

4) AMD Ryzen 7 2700X

 

 

Now I am stuck on Ubuntu: it is not detecting the RAID, and there are no Linux driver links on the AMD or ASRock pages. Has anyone managed to get this working?

 

Update: I found the Linux drivers; AMD just updated their website. I come from a Microsoft background and am now struggling with the driver installation part.

 

Is anyone familiar with driver installation on Ubuntu?

 

https://www.amd.com/en/support/chipsets/amd-socket-am4/b350
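For reference, AMD ships its Linux RAID driver (rcraid) as source that has to be built against the running kernel, typically via DKMS. A rough sketch of the process follows; the archive name, module name, and version string are assumptions, so check the README inside the actual AMD download:

```shell
# Build prerequisites for compiling a kernel module on Ubuntu.
sudo apt install build-essential dkms linux-headers-"$(uname -r)"

# Hypothetical archive name -- adjust to match the AMD download.
unzip rcraid_linux.zip -d rcraid && cd rcraid

# Register, build, and install the module via DKMS so it survives
# kernel updates (module name/version are assumptions).
sudo dkms add .
sudo dkms build rcraid/1.0
sudo dkms install rcraid/1.0

# Load the module and check whether the array appears as a block device.
sudo modprobe rcraid
lsblk
```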

 

 

======================================================================================================================

 

Reason: I need the RAID 0 for a VMFS datastore, and the VM disks are going to be hosted on it. The products I am planning to use are:

 

1) ESXi

2) vCenter with HA

3) vRA

4) Horizon View

5) A few test VMs 

 

These machines are for a test lab; I work a lot with VMware products. Last but not least, I will be backing up the VMs twice a month using Veeam Backup.

 

Guru

Edited by guru.gti

Instead of doing a RAID 0, why not just attach a small SSD for cache? It'd make your life easier. Even if the RAID 0 is just to double your usable storage, I still wouldn't recommend it, especially not booting off of it. Dual booting just adds insult to injury once one drive fails.

 

You may find the data completely disposable, but you'll still spend hours setting everything back up to fix what failed.


Why use VMware Workstation? If you want a homelab, just run VMware ESXi as a full bare-metal hypervisor, or use one of the many other hypervisors like Proxmox, Hyper-V, Xen, or others.

 

Don't use the RAID on the motherboard, it sucks; you want software RAID here.

 

If you want to run VMs in Ubuntu, give KVM a shot. It's the native Linux hypervisor and will work best here.
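If you want to try that route, a minimal KVM setup on Ubuntu 18.04 looks roughly like this (the VM name, sizes, and ISO path are placeholders):

```shell
# Install KVM/QEMU and the libvirt management stack.
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst

# Confirm the CPU exposes hardware virtualization (non-zero count expected).
egrep -c '(vmx|svm)' /proc/cpuinfo

# Create a test VM -- name, sizes, and ISO path are placeholders.
sudo virt-install --name testvm --memory 4096 --vcpus 2 \
  --disk size=40 --cdrom /path/to/ubuntu.iso --os-variant ubuntu18.04
```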

 

I'd disable RAID on the board and make the RAID array with Btrfs or mdadm here.
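For example, an mdadm sketch of that, assuming two empty disks at /dev/sdb and /dev/sdc (device names are placeholders; double-check with lsblk first, since mdadm --create destroys existing data):

```shell
sudo apt install mdadm

# Create a two-disk RAID 0 (stripe) array.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on it and mount it as the datastore.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/datastore
sudo mount /dev/md0 /mnt/datastore

# Make the array and mount persistent across reboots.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
echo '/dev/md0 /mnt/datastore ext4 defaults 0 2' | sudo tee -a /etc/fstab
```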


I'm guessing this is a main PC or general-use computer, hence installing in VMware Workstation?

I have a lab like that with 2 virtual ESXi hosts, a vCenter appliance + PSC, 2 domain controllers, and a couple of clients, so I can break it and redeploy as needed. I just used a local disk rather than a RAID, since you really shouldn't be running anything vital in a Workstation setup...

 

I'm wondering what the logic is in doing a dual-boot environment?


 

Posted · Original Poster
On 3/11/2019 at 9:42 AM, Windows7ge said:

Instead of doing a RAID 0, why not just attach a small SSD for cache? It'd make your life easier. Even if the RAID 0 is just to double your usable storage, I still wouldn't recommend it, especially not booting off of it. Dual booting just adds insult to injury once one drive fails.

 

You may find the data completely disposable, but you'll still spend hours setting everything back up to fix what failed.

The system boots off a Samsung 850 EVO.

 

I need the speed of RAID 0, not the storage space at a lower cost. Not sure if people use RAID 0 to double storage 😉

 

It's a 2 TB RAID 0, and I am planning to add one or two more this weekend before it comes alive on Ubuntu.

 

 

Posted · Original Poster
On 3/11/2019 at 9:46 AM, Electronics Wizardy said:

Why use VMware Workstation? If you want a homelab, just run VMware ESXi as a full bare-metal hypervisor, or use one of the many other hypervisors like Proxmox, Hyper-V, Xen, or others.

  

Don't use the RAID on the motherboard, it sucks; you want software RAID here.

 

If you want to run VMs in Ubuntu, give KVM a shot. It's the native Linux hypervisor and will work best here.

 

I'd disable RAID on the board and make the RAID array with Btrfs or mdadm here.

Using Workstation keeps a lot of things streamlined. (Believe me, just installing ESXi on one host is going to be a nightmare; the setup will become much more complicated, and I will miss out on the occasional fun of playing some games.) Using Hyper-V seems like a good idea here and will revive some old memories of working with MS. I might give it a try when I build another system.

 

I have always believed that software RAID is a bit slow compared with hardware RAID. I might be mistaken, as technology changes every day. Let me know your thoughts on it.

 

Lastly, using KVM won't be of much advantage, as very few corporations use Red Hat's virtualization solutions. One would spend a lot of time learning it, just to find that no one else uses it.

16 minutes ago, guru.gti said:

I need the speed of RAID 0, not the storage space at a lower cost. Not sure if people use RAID 0 to double storage

They do. It's a perk of RAID 0: you get the cumulative storage of the drives combined. Again, an SSD could be used to accelerate a single drive as opposed to using RAID 0.

 

4 minutes ago, guru.gti said:

I have always believed that software RAID is a bit slow compared with hardware RAID. I might be mistaken, as technology changes every day. Let me know your thoughts on it.

The only difference between hardware RAID and software RAID is dedicated hardware. It's still software RAID; it's just that the CPU handles the RAID instead of a dedicated controller. Performance will vary widely with the choice of hardware and file system, but for the most part software RAID is a great solution today and is widely used. It may work well for you if you virtualize the installations. You could even attach an SSD to the pool for read/write caching.

16 minutes ago, guru.gti said:

Lastly, using KVM won't be of much advantage, as very few corporations use Red Hat's virtualization solutions. One would spend a lot of time learning it, just to find that no one else uses it.

Yeah, but almost no one uses VMware Workstation for large clusters either, and all the hypervisors are basically the same; they just have the buttons in different places. KVM just works a bit better on Linux since it's native. If you want to learn a hypervisor, ESXi is good to know, but just fire it up in a KVM VM and play with it (make sure to turn on nested virtualization).
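On an AMD CPU like the 2700X, nested virtualization is controlled by the kvm_amd module; a quick sketch of enabling it:

```shell
# Enable nested virtualization for the AMD KVM module persistently.
echo 'options kvm_amd nested=1' | sudo tee /etc/modprobe.d/kvm-nested.conf

# Reload the module (shut down running VMs first).
sudo modprobe -r kvm_amd
sudo modprobe kvm_amd

# Verify -- should report 1 (or Y on newer kernels).
cat /sys/module/kvm_amd/parameters/nested
```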

 

And tons of people use KVM: AWS, Google Cloud, and many other large providers.

 

And really, it's not hard to learn. It's just: pick CPU, RAM, and disk, and then do what you want in a VM.

18 minutes ago, guru.gti said:

I have always believed that software RAID is a bit slow compared with hardware RAID. I might be mistaken, as technology changes every day. Let me know your thoughts on it.

You don't have hardware RAID here; you would need a RAID card for that. The RAID on a motherboard uses the CPU for calculations anyway. Software RAID is about the same speed if done well, especially with a setup this small, with only 2 SATA SSDs, where you really aren't seeing many IOPS.

 

 

 

Posted · Original Poster
On 3/11/2019 at 3:31 PM, Jarsky said:

I'm guessing this is a main PC or general-use computer, hence installing in VMware Workstation?

I have a lab like that with 2 virtual ESXi hosts, a vCenter appliance + PSC, 2 domain controllers, and a couple of clients, so I can break it and redeploy as needed. I just used a local disk rather than a RAID, since you really shouldn't be running anything vital in a Workstation setup...

 

I'm wondering what the logic is in doing a dual-boot environment?

Agreed that it's not a vital setup, but it takes a lot of time to create. It's almost like keeping an aquarium: the fish die if not taken care of.

 

I went with dual boot because some things just can't be done on Linux. I am relatively new to Linux, so it gives me the opportunity to learn simple Linux tasks as well as have some fun on Windows.

Posted · Original Poster
2 hours ago, Windows7ge said:

They do. It's a perk of RAID 0: you get the cumulative storage of the drives combined. Again, an SSD could be used to accelerate a single drive as opposed to using RAID 0.

 

The only difference between hardware RAID and software RAID is dedicated hardware. It's still software RAID; it's just that the CPU handles the RAID instead of a dedicated controller. Performance will vary widely with the choice of hardware and file system, but for the most part software RAID is a great solution today and is widely used. It may work well for you if you virtualize the installations. You could even attach an SSD to the pool for read/write caching.

I like the suggestion of using the SSD as cache. Can one SSD be used to cache multiple SATA disks? Let me know if there is a blog or link that can help me with this. I would also like to know whether I can benchmark disk performance in this setup.

 

 

1 hour ago, guru.gti said:

I like the suggestion of using the SSD as cache. Can one SSD be used to cache multiple SATA disks? Let me know if there is a blog or link that can help me with this. I would also like to know whether I can benchmark disk performance in this setup.

 

 

There are multiple ways to cache a drive on Linux; lvmcache, bcache, and ZFS are the main ones. You can also use a partition for caching instead of the whole drive.
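As an example of the lvmcache route, assuming the slow logical volume is vg0/data and the SSD is /dev/sdd (both names are placeholders):

```shell
# Add the SSD to the existing volume group.
sudo vgextend vg0 /dev/sdd

# Create a cache pool on the SSD and attach it to the slow LV.
sudo lvcreate --type cache-pool -l 100%FREE -n cpool vg0 /dev/sdd
sudo lvconvert --type cache --cachepool vg0/cpool vg0/data

# Inspect the result; the data LV should now show a cache pool.
sudo lvs -a vg0
```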

 

You can benchmark it, but benchmarking cached storage is a pain, as you're often just benchmarking the SSD or the HDD.
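For benchmarking, fio is the usual tool on Linux. A sketch of a 4K random read/write test against the mounted array (file path, size, and mix are placeholders to tune for your workload):

```shell
sudo apt install fio

# 60-second 4K random 70/30 read/write test with direct I/O
# (bypasses the page cache so you measure the storage, not RAM).
fio --name=randrw --filename=/mnt/datastore/fio-test --size=4G \
    --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
```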

 

 

8 hours ago, guru.gti said:

I like the suggestion of using the SSD as cache. Can one SSD be used to cache multiple SATA disks? Let me know if there is a blog or link that can help me with this. I would also like to know whether I can benchmark disk performance in this setup.

 

 

Using Linux, for example specifically ZFS, you would create a pool (RAID) of drives and assign the SSD to the pool. The SSD can be partitioned to benefit both read and write operations; however, at least with ZFS, it will mainly speed up workloads like virtual machines and databases. This is where Proxmox or ESXi would come in.
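A rough ZFS sketch of that, with /dev/sdb and /dev/sdc as the pool disks and two partitions on the SSD (all device names are placeholders; note a striped pool has no redundancy):

```shell
sudo apt install zfsutils-linux

# Striped pool (RAID 0 equivalent -- no redundancy).
sudo zpool create tank /dev/sdb /dev/sdc

# SSD partitions: one as L2ARC read cache, one as SLOG for sync writes.
sudo zpool add tank cache /dev/sdd1
sudo zpool add tank log /dev/sdd2

# Verify the pool layout.
sudo zpool status tank
```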

