Jarsky

Member
  • Content Count: 3,036
Everything posted by Jarsky

  1. Personally I haven't done a "fresh" install of Windows since Windows 8 back in 2012 on my gaming PC. I've just constantly done in-place upgrades and cloned to new SSDs. I'm now 3 generations of hardware in, and 4 generations of GPU, without needing a fresh install of Windows... it's just an unnecessary headache these days to reconfigure everything. I've typically just used Clonezilla personally, and I unplug any additional disks during the clone and the first boot. After the first boot, also check "Disk Management" to make sure that your partitions are all OK. From memory I used EaseUS to fix my partition layout, then unplugged the old SSD to make sure everything was running fine without it. I typically keep my old SSD in its original state for about a month before wiping it and reusing/selling it.
  2. In addition to leadeater's comment, make sure you mirror the configuration across the banks for both CPUs as well, otherwise it will still fail POST. HCDIMMs are quite expensive, so basically for the best support, use dual-rank 1.5V RDIMMs. So memory marked as 2R and ECC Reg. Essentially the memory you're looking for will be marked as 2Rx4 PC3-8500R (not PC3L or 1R).
  3. Yup, that is correct behavior, because "Devices" is your temporary storage (e.g. USB key, external hard drive, optical drive, etc...) which is mounted under /media. Also, if you type df -h, you should now see your 3.7TB mount for /mnt/cache. If you want a shortcut, you should be able to navigate to /mnt and right click on the cache folder and 'Send to...' to create a shortcut, or do it from the terminal as in the sketch below.
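     Just to illustrate both checks from the terminal (a minimal sketch; the ~/Desktop path is only an assumption about where you'd want the shortcut to live):
     df -h /mnt/cache                  # confirm the 3.7TB filesystem is mounted at /mnt/cache
     ln -s /mnt/cache ~/Desktop/cache  # terminal equivalent of the 'Send to...' shortcut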
  4. So this is your actual RAID:
     /dev/md0p1 ext4 cache (not mounted) e2b3039e-d1e6-4bf9-b179-e26157403205
     So your fstab entry should look like this:
     UUID=e2b3039e-d1e6-4bf9-b179-e26157403205 /mnt/cache ext4 defaults 0 2
     Then once you've done that, run sudo mount -a and check that it mounted, as below.
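     A quick way to confirm the mount took (nothing here is specific to your box beyond the /mnt/cache path above):
     sudo mount -a        # mounts everything in /etc/fstab; it will complain if the entry is malformed
     findmnt /mnt/cache   # should show /dev/md0p1 mounted at /mnt/cache as ext4
     df -h /mnt/cache     # should report the ~3.7TB filesystem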
  5. Well that's positive, so at least it's working. This is your issue here: even though you created /mnt/cache, that UUID is invalid, which means nothing is actually mounted at that mount point, hence why changing permissions there had no effect on your RAID. You should edit your /etc/fstab and remove the entry you made in there for /mnt/cache. If you want further help on this we'll need the outputs of the commands below to give you accurate instructions:
     sudo blkid -c /dev/null -o list
     cat /etc/fstab
     Otherwise, if you're happy with it working as-is, then I recommend marking Nick7's answer as the solution.
  6. That's really odd, because /media is the mount point for removable media, so it should only apply to things like external HDDs and USB keys, not to fixed disks / RAID. Have you tried rebooting, or running sudo umount -a && sudo mount -a to remount the share? Can you run ls -la /mnt/cache and post the output? Can you also try putting a file in that folder (e.g. touch /mnt/cache/test.txt) and then see if it shows up in your "cache" folder?
  7. Sorry if I don't believe you're building 2 x ~$50,000 systems just for home... feel free to prove us wrong though...
  8. Some of what you may experience is particular to Linux Mint and the desktop environment you're using, which makes things a bit more specific when working in the UI. Since there's no gksudo, it seems they've removed GTK+. I'm not familiar with Linux Mint, but it seems the command should be nemo admin:// It sounds like you just have a permission issue though, so honestly I would just fix it through the terminal. You can simply open a terminal and run sudo chmod 777 -R /mnt, where /mnt is the path that you mounted it to using fstab. That will give access to the volume for all users on the system (see the example below).
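     Just to spell that out (assuming /mnt really is where your fstab entry mounts the volume):
     ls -ld /mnt             # check the current owner and permissions on the mount point
     sudo chmod -R 777 /mnt  # recursively give read/write/execute to every user, as described above
     ls -ld /mnt             # should now show drwxrwxrwx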
  9. There's no inherent reason why a Pi would fail if used 24/7; it's just an ARM-based SoC platform. This is one of my Pis that has been on 24/7 for the better part of 8 years. It started life as a Kodi box, then moved into being a webserver, and these days it's the secondary DNS server to my Pi 3B+... it reboots automatically every month for automated updates and patching (hence the 23 day uptime). The main thing to keep in mind is to periodically back up your SD card... I just physically take mine out of my Pis and clone them to a backup image about every 3 months, just in case a card fails. The cloning can be as simple as the sketch below.
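     A rough sketch of that backup, assuming the card shows up as /dev/sdX on the machine you plug it into (double-check with lsblk first):
     sudo dd if=/dev/sdX of=pi-backup.img bs=4M status=progress   # image the whole card to a file
     sudo dd if=pi-backup.img of=/dev/sdX bs=4M status=progress   # restore it later to a card of the same size or larger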
  10. You're going to need:
     - A gate chip
     - Some wire
     - Gatorade
     - A Potato
     That will create a simple computer.
  11. I wouldn't say that's an issue if you're dealing with large file sizes. Connections can get broken, and FTP can automatically check and auto-resume transfers without having to re-transmit the entire file. You can also do multithreading and segmented transfers with the right software, and that's probably something the client will want to do, since single-threaded performance can often be quite poor internationally when traversing many networks (see the sketch below). Also: an FTP server could run on a potato. I can max my Gigabit even with mine, which is a VM with only 1 CPU core and 1GB of RAM assigned to it.
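     As a rough illustration of a segmented, resumable pull (assuming lftp on the client side; the host, user and filename are made up):
     lftp -u user -e "pget -n 4 -c bigfile.iso; quit" ftp://ftp.example.com
     # pget -n 4 downloads the file in 4 parallel segments, and -c resumes a broken transfer instead of starting over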
  12. This is how much of the (512MB?) FBWC cache is dedicated to reads vs writes, as the name suggests. Typically you'd go with somewhere between 50/50 and 70/30 (read/write) for a general-purpose server; it depends on which is going to be your most intensive workload. You can always adjust it as you need to.
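     If it's an HPE Smart Array being managed from the OS, the ratio can usually be changed with ssacli; a sketch only, and the slot number and ratio here are just placeholders:
     sudo ssacli ctrl slot=0 show detail | grep -i "cache ratio"   # show the current read/write cache ratio
     sudo ssacli ctrl slot=0 modify cacheratio=70/30               # 70% of the cache for reads, 30% for writes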
  13. The simple answer is no. Your "internet" can only go as fast as the slowest link. If you have 1Gbit Ethernet in your house to your PC, but the plan from Verizon can only do 900Mbps, then you're limited to 900Mbps. Also, the "Internet" doesn't have an absolute speed; the speed you get depends on the speed of whatever you're connecting to, and on the rest of the path beyond your ISP as well.
  14. Awesome work. When an MD array is first set up it needs to go through an initialization like most other RAIDs. Because you set up a RAID 0 there is no initial resync, as there is no parity/mirroring, which is why it finishes straight away. You should really consider whether you want to have a RAID 0 though. I understand it's "just a cache", but if a drive fails/drops out, you lose the entire RAID, so your cache has to re-download everything. Typically a RAID5 is more the go-to configuration for something like this, to avoid complete data loss. If you do rebuild it as RAID5, the initial resync is easy to keep an eye on, as below.
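     Just a sketch of monitoring that initial resync (md0 is only the usual default device name; substitute whatever yours is called):
     cat /proc/mdstat              # shows the array state and a progress bar while it's syncing
     sudo mdadm --detail /dev/md0  # shows per-disk state and the resync progress percentage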
  15. If you want to see what actual drives are behind the /dev/mapper/pdc_ddajjhji then I recommend running the command sudo lsblk. You might also try dmsetup, with the below command:
     sudo dmsetup -v table /dev/mapper/pdc_ddajjhji
     This will tell you what sectors are allocated to which drive/partition, which you can match against the info from the previous lsblk command. IMO though, destroy the LD (Logical Disk) from your RAID configuration utility, use the disks independently, and just create a Linux MD RAID (a rough sketch follows below). Linux Mint, I believe, has the mdadm utility by default: https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
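     Roughly what creating the MD array would look like once the LD is destroyed and the disks show up individually (the device names, RAID level and member count are placeholders for whatever lsblk reports on your system):
     sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc   # build the array
     cat /proc/mdstat                                                                     # watch the initial sync
     sudo mkfs.ext4 /dev/md0                                                              # then put a filesystem on it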
  16. Installing ESXi as a nested VM is the exact same process as you would have followed to install the original ESXi you're already running. Or perhaps I'm not understanding your question properly.
  17. (In topic "AMD Epyc") That motherboard has a built-in POST code readout. What do the POST codes say when you start it up?
  18. So you're going to pay an extra $100+ for an extra 2% efficiency over, for example, a 500W 80 Plus Gold SFX? Considering that system has a draw of around 90W at max load, and given it's a file server it will mostly sit idle around 40W, you're talking approximately a $3/year saving, about 25c/month... at $100+ extra it would take over 30 years of constant running for that to pay for itself.
  19. The latest generation from LSI are the 9500 series, which do 12Gb/s SAS with NVMe support over PCIe Gen4. There's no point getting these latest controllers for home use with HDDs. You could maybe do the 9300 series if you want to do SSD pools as well, but anything newer than that would be a waste of money.
  20. Is this thread a question, or for feedback? If so, it needs more detail about requirements and budget... otherwise, if it's just a statement, this thread should be closed. Also, the network card "Asus XG-C100C PCIe x4 10 Gbit/s Network Adapter" has compatibility issues outside of Windows. These cards, like a few other 10Gb cards, rely on an Aquantia driver... which may or may not be in FreeBSD, and then you also need a build of FreeNAS which includes that latest FreeBSD build. The 750W PSU is well overkill for this system; you could easily save $$ here. Also that case is ATX and the PSU is a BTX. If you want a smaller PSU, I'd get something like a 450W SFX and use an SFX-to-ATX adapter.
  21. "User Scripts" is just UnRAID name for a place you can put bash script, and set it up as a cronjob (timer). Essentially its Linux's version of "Task Scheduler" in Windows. The user "script" could be as easy as just 1 line with the word "shutdown now", and then set that user script to run at a specific time, to shutdown the server
  22. Sounds like a good plan. That is it, yes. The 9211-8i is not a RAID card; it has some limited RoC (RAID on Chip) functionality, but it has no cache, battery/EEPROM backup, etc... Yup, that is an M1015. They're essentially the same card; the difference is that the 9211-8i is an LSI-branded card, while the M1015 is an OEM card. In this case the layout is a bit different due to it being made for a rackmount server, and the OEM cards have slightly modified firmware. I'd really recommend looking at eBay for the cards though, as those prices are far too high, ~US$100 for cards that are 4 generations old now. These are what the pricing should be, about $30:
     9211: https://www.ebay.com/itm/LSI-6Gbps-SAS-HBA-Fujitsu-D2607-A21-FW-P20-9211-8i-IT-Mode-ZFS-FreeNAS-unRAID/143299825144
     M1015: https://www.ebay.com/itm/LSI-6Gbps-SAS-HBA-LSI-9200-8i-IT-Mode-ZFS-FreeNAS-unRAID-2-SFF8087-SATA/124178147022
     H310: https://www.ebay.com/itm/Dell-PERC-H310-8-Port-6Gb-s-SAS-Adapter-RAID-Controller-HV52W-Replaces-Perc-H200/192120843762
     They're all the same card. Some vendors sell the M1015 & H310 pre-flashed with the LSI IT firmware, but you can always flash them yourself as well.
  23. Personally I'd get the i3, but it doesn't really matter; either of them is a good CPU. I'd drop that aftermarket cooler and just get a single 8GB stick, which will give you an extra $80 back. Also, a cheaper case like a Corsair 88R or an Antec VSK10 will give you an extra $30-40, leaving you ~$110-120 in either savings, or to spend on bigger drives considering this is a storage server.
  24. If you're scheduling this backup, then just create a basic user script to shut it down. If you know your backup runs @ 2am on Wednesday and takes 2 hours... then use the User Scripts function to create a cronjob to give the shutdown command @ 5am. Then to turn it back on, assuming your second unRAID box has a network adapter/mobo that supports WoL (Wake-on-LAN), UnRAID also has etherwake included by default, so you could create another User Script on a box that stays powered on to run etherwake <mac address> at say 1:30am (see the sketch below). I don't know if those old Inspirons support WoL. Alternatively, you could just leave the box on and enable disk spin down so the HDDs will sleep 90% of the time.
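     Roughly what the two user scripts would look like (a sketch only; the MAC address and cron times are placeholders for your own schedule):
     #!/bin/bash
     # script 1, on the backup box, scheduled for 5am Wednesday ("0 5 * * 3") - shut it down after the backup window
     shutdown now

     #!/bin/bash
     # script 2, on the unRAID box that stays on, scheduled for 1:30am Wednesday ("30 1 * * 3") - wake the backup box over the LAN
     etherwake AA:BB:CC:DD:EE:FF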
  25. FYI you can add VMware to the SCVMM fabric. Keep in mind clusters have to be the same technology to be in the same cluster, for supporting DRS/HA/EVC, vMotion, etc... That is to say, if your existing servers are Intel Xeons and you want to add a new host into a cluster, it also needs to be an Intel Xeon.

     You can use VMware vCenter Converter. You can convert physical machines (P2V) or machines from other platforms like Hyper-V (V2V). You can even convert from VMware Workstation & Fusion. In many cases you can also do a live migration using this tool. Yup, ESXi has snapshots and can quiesce machines if you install VMware Tools.

     I assume by vSphere license you're talking about the vSphere suite (vCenter, vRealize, etc...)? ESXi does SNMP, so you could do monitoring via that with something like Grafana or Nagios. Management, though, you'll probably have a harder time with. You could use scripts through PowerCLI, depending on what you can find and your experience with PowerShell scripting. FYI, if you get a VMUG Advantage membership, which is $200/year, you get a 12 month license for vSphere, essentially giving you everything for a cheap cost. https://www.vmug.com/membership/vmug-advantage-membership

     It depends on what you're doing, but a lot of infrastructure is moving towards IaaS. So unless you're going to be working for one of the companies that provides the actual IaaS infrastructure, I wouldn't focus too much energy on this; I'd be looking more at Azure & AWS. I certainly wouldn't be doing more than VCP-DCV or 70-652 unless you're going to specifically be working in an IaaS or dedicated compute environment. I do still currently support some VMware infrastructure, most of it being our in-house platforms for our wider business. (We're a communications company like Comcast or AT&T, but we do managed services for thousands of businesses, and full ICT solutions for lots of large Enterprise companies.) The majority of our customers, however, are now hosted in IaaS with a company that we own (which our Cloud Compute engineering team looks after), or are hosted on Azure or AWS etc...

     I assume by System Administrator you're referring to the full infrastructure. Depending on the scope of your customers, you may find yourself working with Windows Server (Active Directory/GPO, DNS/DHCP, etc...), Citrix, VPN, database clusters, Exchange, Skype/Teams, SharePoint, etc... If that's the case I'd be concentrating more on cloud services like Azure, Microsoft 365 (Intune, Office 365, etc...), Citrix Cloud, Trend Micro Cloud, and cloud computing technologies in general. This is where a lot of jobs are starting to become more common, and these are the technologies I'm focusing more on.