Everything posted by bANONYMOUS

  1. I mentioned this in the video when running the Geekbench 5 tests on Windows 11. You can see that it says there's a different power profile called Turbo that Windows 10 didn't have. So, the results are still valid as it's a clean install vs clean install, exactly how they come out of the box. The average user would get these results without tweaks or modifications.
  2. Are you using any software to control fan curves or performance? I ask because I intentionally ran this without any of the Asus software on my laptop, so it was 100% Windows-controlled with no overclock. All factory stock specs.
  3. I made a video doing a bunch of benchmarks comparing Windows 10 to Windows 11. In my tests, Windows 11 had an 18.75% faster boot time. In 3DMark it got a 9.74% better score and a 2.05% better clock speed, however the test ran 7.08% hotter on the CPU and 2.57% hotter on the GPU. In CrystalDiskMark I got a 15.03% better read speed and a 4.41% better write speed. In Geekbench 5 it shows a 9.04% better single core score and a 15.59% better multi core score, while the clock speed was again 2.05% better, and it actually ran 4.13% cooler on the CPU. EDIT: The higher clock speeds and temps are a result of a "Turbo" performance profile in Windows 11 that Windows 10 doesn't have from factory.
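For anyone checking the math, every number above is just the standard percent-change formula against the Windows 10 result. A minimal sketch; the scores below are placeholder values for illustration, not my actual results:

    # Percent change of a Windows 11 result vs. the Windows 10 baseline.
    # Placeholder scores only, not real benchmark numbers.
    win10=1000
    win11=1100
    awk -v old="$win10" -v new="$win11" \
        'BEGIN { printf "%.2f%% better than Windows 10\n", (new - old) / old * 100 }'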
  4. Has anyone figured this out yet? It's really annoying turning my VPN off just to sign into Messenger, then turning it back on and everything keeps working. Facebook is legit just blocking a sign-in because I'm using a VPN, which is really messed up. I also don't have a Facebook account, and I need one to contact their tech support for a solution to this issue. I made my Messenger account long ago with my phone number, with no associated email or Facebook account because that wasn't required back then, so I can't sign in with an email that doesn't exist on that account. I get the same result on my phone as well: if I log out and try to log back in, it won't work with the VPN running, but as soon as I turn it off, I can sign in just fine, and with the VPN back on while already signed in, it keeps working fine. But on my computer, I don't let the browser save anything; it wipes everything every time it closes, so I have to sign back into Messenger every time, and this is driving me nuts. I'm about to bail on Facebook Messenger entirely because of this. EDIT: I tried downloading the desktop app on Windows and it tells me I'm not allowed to post, comment, or do other given things, to help protect the community. Apparently trying to sign into Messenger while using a VPN is against the Facebook protection guidelines now? It says I have to wait due to spam, and then says to Learn More, but I can't click it to see anything. So, I think Facebook is banning VPNs from being used with their services as part of their user agreement? I'll give it a week for something to get fixed, otherwise I'm switching over to Signal lol
  5. After installing Windows the second time, I guess I didn't change the boot properties, and I was able to boot from a Linux live USB just by having it plugged in; it booted from it as the first boot priority. I accessed the Windows partition from within Linux, deleted the .tgz files from the Windows Desktop directory, rebooted, and there was a BIOS splash screen again, the Windows loading screen again, I got back into the Windows desktop, and ethernet works now. What the actual F. This is legit 100% a virus from Google. The .tgz files from Google Takeout are 100% the root cause of all of this. The first time I downloaded the Google Takeout files, the ethernet cut out and they failed; I tried to delete the incomplete files and it locked up. I wiped the BIOS and downgraded thinking that was the issue, which didn't change anything, then upgraded back to the latest version and reinstalled Windows. I downloaded the Google Takeout files again, and the ethernet cut out exactly like it did before, but I had wifi connected this time and the files completed. I extracted my data from them, then tried to delete the .tgz files, and it locked up the computer again. I was able to get into a live environment of Linux this time, deleted the .tgz files from Google Takeout, rebooted, and everything is fixed, everything works: ethernet is back, BIOS is back, everything. This is legit some type of virus from Google.. spooky
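In case anyone needs to do the same cleanup from a live USB, this is roughly the process; a minimal sketch, and the device name and Windows user path are assumptions you'd swap for your own after checking lsblk:

    # Find the Windows partition (the big NTFS one)
    lsblk -f

    # Mount it read-write with ntfs-3g (device name here is an example only)
    sudo mkdir -p /mnt/windows
    sudo mount -t ntfs-3g /dev/nvme0n1p3 /mnt/windows

    # Delete the Google Takeout archives from the Windows desktop
    rm /mnt/windows/Users/YOUR_USER/Desktop/takeout-*.tgz

    # Unmount cleanly before rebooting
    sudo umount /mnt/windows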
  6. Yes I know, I'm saying it's gone from being accessed; the entire boot splash is gone. If I just let it boot up normally, it boots to a black screen and just sits there, and the monitors don't come on until the login screen. There is no BIOS splash screen, no Windows loading screen, nothing. If I hit Delete on boot while it's at the black screen, it just freezes the computer; it stays at a black screen and nothing happens. I assumed at first that the BIOS wasn't allowing video output over DisplayPort, so I tried with HDMI, still nothing. I tried hitting CTRL+ALT+DEL to reboot out of the BIOS and that doesn't work either, so it's legit frozen. If I shift+restart from within Windows and, in the troubleshooting blue screen, try to access UEFI Options (which reboots into the BIOS), it just reboots the computer to the black screen, and after a few seconds, the login screen appears. If I install the Gigabyte BIOS software to control the BIOS from within Windows, I can see options, change options, do whatever, but nothing saves; when I reboot, it's back to the old settings. There is no way to access the BIOS right now, and this has been happening since disabling CSM again and removing that one M2M port from the RAID array. As for the current issue with the computer locking up, Windows is still deactivated and ethernet doesn't work at all. Just to test the new cable, I plugged it into my laptop and it works fine, so the cable works, the internet works, everything works until it's plugged into my desktop. And all of these issues start as soon as I try to delete those .tgz files from Google Takeout. Totally throwing this in the air, but this sounds like a virus from Google. Like, it's super unlikely, but this has Virus written ALL over it. Download a file and it disables ethernet; try to delete the file and it crashes the computer; every time I try to delete the files, something else goes wrong. The only thing is, this is a fresh install of Windows 10 Pro from Microsoft with nothing installed. AND I'm using the identical build and USB installer on my laptop and it's working perfectly, so the download isn't corrupt and the USB installer isn't corrupt. The laptop is running the identical Nvidia Studio driver, so that's not the issue either. I also can't reboot into safe mode now; it just keeps rebooting back into the regular Windows desktop.
  7. Yes I know, I'm saying the BIOS is gone. To enter the BIOS on a Gigabyte Aorus Xtreme Z390 you hit Delete, and F12 is for boot options. I've had this motherboard for a little over 2 years now and have been in the BIOS many, many times. What I'm saying is that when I boot the computer, my monitors don't turn on; they stay black, and when they do turn on, the computer is already at the login screen. There is no splash screen anymore that says Aorus, there's no loading wheel when it's loading Windows, nothing. It just boots: black screen, login screen, desktop. I've also just recreated the entire thing, and now I think this has something to do with Google. The rabbit hole continues. I downloaded my entire Google Photos backup again with Google Takeout, and the same thing happened as before: the ethernet cut out, the entire computer glitched out, and the downloads failed. I tried to delete the incomplete files, which are .tgz files from Google Takeout, and it locked up the computer; I couldn't do anything. I had wifi connected this time because I was running a new ethernet cable today to upgrade everything to Cat6, so while setting that up I didn't have an ethernet cable in the studio, and by the time I was downloading everything from Google Takeout, I was on ethernet again. Halfway through downloading, the computer glitched out and locked up for a second exactly like before; this time wifi auto-switched, the downloads continued, they worked, and I have my entire Google Photos Takeout. This is where it gets weird: I extracted the .tar files from the .tgz files because Google compresses twice for some reason, and once I had all of the .tar files, I tried to delete the .tgz files as they're not needed anymore. It happened again: the computer completely locked up, and only the browser works. I can still browse the web, listen to music, etc, but I can't open the start menu or move a File Explorer window, can't even open Task Manager to force close explorer so it reopens, nothing; I had to hard reboot. Now the plot thickens. I was going to try to reboot into recovery using shift+restart from the start menu, and it opened Settings, and now my computer isn't activated anymore, and when I try to activate, it says there's an error connecting to Microsoft. This is a clean install of Windows 10 Pro downloaded from Microsoft; I made the USB installer using Rufus in GPT for UEFI, and I have CSM disabled in the BIOS. It's currently a RAID 0 using only the M2A and M2P M.2 ports, as those are the direct PCIe lanes, so that SATA-controlled M.2 connector isn't in this RAID anymore and that NVMe isn't even formatted; it's just blank space right now. I only did all of the Windows updates and installed the latest Nvidia Studio driver, and these files from Google seem to be the root cause. I have the computer working again, and I wanted to test it out. I took some random files off my external drive to the desktop, some random files downloaded, some installers downloaded, saved everything to the desktop, rebooted (the computer is still not activated and still won't activate), and deleted each file one at a time. Everything works right until I try to delete those .tgz files from Google Takeout; then it locks up again. So that's the root cause, with the exact steps now repeated with identical results: when I try to delete the .tgz files from Google Takeout, it locks up my computer and causes havoc. The only issue now is I can't even get into the BIOS to reinstall Windows. Not entirely sure what to do here.
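Side note on the double compression: a .tgz is just a gzipped .tar, so pulling the inner .tar files out is a one-liner. A minimal sketch, assuming the archives are named takeout-*.tgz like mine were:

    # Decompress each Takeout archive to its inner .tar,
    # keeping the original .tgz (-k) until everything is verified
    for f in takeout-*.tgz; do
        gunzip -k "$f"   # gunzip recognizes .tgz and writes takeout-XXX.tar
    done

    # Then unpack any one of the .tar files into the current directory
    tar -xf takeout-001.tar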
  8. So far everything has remained working, everything is good, however the BIOS is gone on boot. There's no splash screen, nothing; the monitors stay at no signal and then it jumps straight to the Windows login screen. I tried mashing Delete on boot and it just locked up and stayed at a black screen. I tried Shift+restart to get into the troubleshooting power menu and reboot to UEFI Settings, and it just reboots the computer back into Windows. I tried installing the Gigabyte BIOS settings app and that also does nothing. So like, yes, the computer is working, but the BIOS is gone and there's no way to access it. This motherboard is clearly haunted; does anyone know of a pastor that specializes in computers? Cause this thing clearly has a poltergeist and I'm 99% sure I need to perform an exorcism at this point.
  9. It's been working like this for just over a year now. All I did was wipe my computer to reinstall a fresh copy of Windows for a youtube video to demonstrate how to do something, and I updated to the latest BIOS before doing the clean install of Windows. I tried going back to the old BIOS, but the problem seems to be here to stay. Also, after swapping the NVMe's around, drive 3 is always the slow one no matter what I put in there, and just like I suspected, drive 3 is M2M: the top M.2 slot that shares bandwidth with the SATA controller. So I don't know what the issue is, but port M2M is slower than the other two, and I think that's what's causing my issue with this RAID array and why it corrupted. I just set up a RAID0 with M2A and M2P, and so far everything is working perfectly, so I think this motherboard just sucks at RAID. For some reason it has three M.2 PCIe ports, but if you use them all, you can't use RAID; it's one or the other, and I think it only worked before because the array was set up on an older BIOS, before this issue was found and fixed in a later release. So my assumption and conclusion here is that BIOS F8 (which is what was installed when the initial RAID was configured) had a bug allowing you to make a RAID0 using all three M.2 slots, and in BIOS F9j I'm assuming it's been fixed, so now you can only use the two PCIe M.2 slots, and they probably disabled the M2M SATA slot from being used because of failed RAID arrays just like what happened to mine. It's probably how it's supposed to be, and my computer was just working within a bug or something. I tried googling more about this, and I actually found tons of stuff about people running two NVMe's, but I haven't found a single setup thread about using all three together, so again, it seems like it was a bug, that's why it was working, and now it's fixed in the latest release. Basically, no matter how I look at this, I just lost 1TB of usable NVMe space on my boot drive, so now I'm trying to find the cheapest reliable option for 2TB NVMe's lol. We'll see in a day or two whether this stays up and running or fails again.
  10. I'm going to swap the NVMe's around on the motherboard and come back to report findings, because I just tried formatting all three NVMe's again to make ext4 partitions: drives one and two both show ext4 GUID and say Linux partition, while drive three I was able to get formatted with an ext4 partition, and it says GUID now, but it says it's a basic filesystem. I'm literally doing the exact same process for all three drives. RAID is completely disabled in BIOS and I booted off of this live USB with AHCI, so there is legit nothing RAID-related active in BIOS at all; this is entirely drive related at this point, or M.2 port allocation. So I tried formatting all the NVMe's again and made NTFS partitions this time; all three work and all three are identical now, they all show the same everything. Ran a benchmark, and this is where it gets interesting:

Drive 1 - 3.5GB/s Read, 2.9GB/s Write
Drive 2 - 3.5GB/s Read, 2.9GB/s Write
Drive 3 - 2.8GB/s Read, 2.7GB/s Write

Every drive has the same format, there is nothing on any drive, RAID is completely disabled, and drive three is slower than the rest. So I'm going to take my entire computer apart, swap the NVMe's around, and run this benchmark again to see if the slow drive changes port, or if the same port is slow. If the same port is slow, I think the motherboard just can't do NVMe RAID with three devices, because that one M.2 port shares bandwidth with the SATA controller, which is why SATA ports 5 and 6 get disabled if you run a drive in that port. The part that doesn't make sense right now is why this worked for the last 6 months on Windows and now, on the latest BIOS, suddenly doesn't work at all. Like, was BIOS F9i just bad and didn't detect this issue? Or is this an issue with F9j? Cause even restoring back to F9i, or even F8 (I even tried the launch F3 BIOS, because my 9900K is a P0 first release compatible with F3, whereas the R0 revisions of the 9900K need BIOS F5 to run), it's the same issue every single time, which was leading me to believe this is a hardware issue and not BIOS related. But now I have no idea at all. It could be one failing drive that just didn't show up in CrystalDiskInfo, which is crazy considering these are only a year old and only have about 3000 hours of use on them, so that's not a good look for Adata if that's the case. But also, it's only ONE drive, there's only ONE M.2 port that shares bandwidth with SATA, and for some reason only ONE NVMe is showing as PCIe 2.0 in BIOS, whereas the other two are PCIe 3.0 like they should be. So it could be a bad drive, a bad SATA controller, or so many different BIOS flashes that the EEPROM is now corrupted, so every BIOS flash inherits the existing issue that doesn't get overwritten when I flash a new BIOS (highly unlikely, but a possibility). OR, for some reason, this motherboard just doesn't run that last M.2 port at PCIe 3.0 speed because it shares bandwidth with the SATA controller, and it's just been working for some magical reason this entire time when it was never supposed to work, and now that I updated everything and restored all defaults, the glitch that was making it magically work is corrected, and this issue should have just been there from the start. Not entirely sure yet, but this is becoming quite the rabbit hole.
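If anyone wants to reproduce the per-drive comparison without a full benchmark suite, a quick-and-dirty read test from a live USB looks something like this; a minimal sketch, and the device names are assumptions (mine enumerate as nvme0n1 through nvme2n1):

    # Sequential read timing on each NVMe; run it a few times per drive
    # and compare. hdparm -t times buffered disk reads with no prior caching.
    for dev in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
        echo "=== $dev ==="
        sudo hdparm -t "$dev"
    done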
  11. I dismantled the RAID entirely, turned off the RST controller, booted into a live USB of Linux to check the NVMe's, and I have the HDD unhooked, so there are only three NVMe's active and nothing else. I checked in Disks in Linux: two NVMe's are showing as NTFS and the third is just unallocated space with no format, which I thought was really weird. I just formatted all three drives as ext4 across their entire capacity to test them, and I'm going to check whether anything is wrong with them separately, because there is now nothing RAID-configured in the BIOS, so this should rule out any drive failure that CrystalDiskInfo didn't see on Windows. One thing I noticed so far is that the one drive that was coming up in Linux as unallocated space isn't showing a partition table. The two that were showing up as NTFS are GUID, and the unallocated one doesn't have a partition table. I formatted all three as ext4, yet that third drive is still not showing a partition table. I'm starting to suspect this might be the SATA controller and not the RAID controller, because one of the M.2 slots disables SATA 5 and 6.
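For reference, this is the kind of per-drive check I was doing; a minimal sketch, device names assumed again:

    # Filesystem and partition-table overview for every block device
    lsblk -f

    # Show the GPT details for one drive; a healthy drive reports
    # "GPT: present", while the odd one out showed no partition table
    sudo gdisk -l /dev/nvme2n1

    # Re-create an ext4 filesystem on a partition (destroys its data!)
    sudo mkfs.ext4 /dev/nvme2n1p1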
  12. RAID0 because 1TB NVMe's were on sale when I bought them, so it was cheaper to buy three 1TB NVMe's instead of a single 2TB NVMe, and it's just for ease of use so everything is on one drive; with the reliability of NVMe's, I've never had issues running RAID before. My laptop, which is a Zephyrus Duo, comes with two 1TB NVMe's in RAID 0 from the factory. The idea is to just have everything on one "drive" according to Windows, so there is no confusion about which drive has what installed or which programs/files are where; everything is on one, and it's just a lot simpler to operate when it's all working. With my laptop it has been flawless, and running this setup on my desktop with Linux was also flawless, but that was a software RAID0 with mdadm. As far as I know, you can't do a software RAID with Windows that the actual OS is installed to, so I have to use a hardware RAID through the BIOS. It was working fine, but I had this idea that I wanted to start investing in cloud storage and get rid of my mechanical hard drives altogether. So I backed everything up to an external drive, wiped everything, and when I was setting it all up it was fine, but then during normal use issues started appearing, so I think it was a corrupt RAID because CSM was turned back on while the RAID0 was already configured from before resetting the BIOS. So: RAID0 configured with CSM off, updated BIOS to the latest revision (I think I was on F9i before, and now it's F9j), reset BIOS default settings, CSM was turned back on by default, didn't notice, installed Windows on the already existing RAID array, and over time, CSM being enabled made the BIOS corrupt the RAID every time the NVMe's were accessed. That's my theory anyway, but as to why one NVMe is reading as PCIe 2.0 and one NVMe is missing entirely now, I'm not too sure. I still have a hunch that the RAID controller is dead or currently dying.
  13. Okay, no, there's more issues. Now that I have the BIOS on the latest F9j, where I was before, I restored optimized defaults just to get rid of any changes, so it's right-out-of-the-box default and I can adjust from there for diagnostics' sake. All I've done so far from default is disable CSM, save and reboot into BIOS again, and change the three PCIe storage devices, which were ports 9, 17, and 21 for my NVMe's, all of which are PCIe 3.0 devices. They are all RST Controlled; I saved, rebooted, went to Intel Rapid Storage Technology to manually set up a new RAID, and my old RAID actually comes up for some reason, so I guess it does still exist, but it says failed, and for some reason only two NVMe's are showing up as options for RAID. So now I have all three NVMe's set to RST Controlled, they all come up and are seen normally, but in the RAID config window it's only showing two, and one is PCIe 3.0 and the other is PCIe 2.0. So again, I think the RAID controller might be dead; this is super glitchy and nothing makes sense right now, it's all over the place with issues.
  14. I figured out the NVMe RAID config again. CSM was enabled again by default when I downgraded and upgraded BIOS versions, so that's resolved. Now when I go to SATA And RST Configuration, it only shows PCIe storage for the actual NVMe ports being used, so the three real devices come up now and I can make an NVMe RAID again. However, that one menu in the BIOS is still SUPER slow; it legit takes multiple seconds just to scroll from one selection to the next, so something is really wrong with the BIOS, and it did this in older versions as well, so something is still messed up here and I'm not entirely sure what's going on. However, I'm thinking this might be the issue I was facing in Windows: because CSM was enabled after updating the BIOS when I did the clean install of Windows on the old, already established RAID config, CSM was causing the BIOS RAID to glitch out whenever I tried to access the NVMe's, and over time it eventually corrupted the RAID, which also explains why it kept getting worse and worse the more I tried to use it. So I think, so far, I should be able to reinstall Windows, and I just have to wait this out to see if the issue comes back after using it for a while. But there is still something wrong with the BIOS; this one menu is SUPER slow, and it's only the RAID config menu, the rest of the BIOS is super fast. As soon as I open this window, it's like the integrated RAID controller on the motherboard is failing or something. Or possibly CPU/RAM failure, but I can't test that fully without a confirmed working motherboard. Any suggestions on further testing I can do would be appreciated. I'm literally just figuring it all out as I go, so I'm willing to give anything a shot at this point.
  15. Specs: i9-9900K stock clock, Gigabyte Aorus Xtreme Z390 with BIOS F9j, 32GB (4x8GB) Corsair Vengeance RGB Pro, 3x ADATA XPG Gammix S11 Pro NVMe in RAID0 through the BIOS, 3x 8TB HDD (currently not hooked up, for diagnostics), 1200W Corsair HX1200, EVGA FTW3 GTX 1080 Ti. Currently running a clean install of Windows 10 from Microsoft that I installed last night. I'm using the same build and USB installer on my laptop right now and that's working fine, so it's not a bad install or bad Windows USB creation. So as of right now, my desktop and laptop are the same in terms of software: clean install, only the Windows Update drivers installed, and the latest Studio driver from Nvidia. The laptop works perfectly; the desktop, for whatever reason, just started locking up today for no reason.

What I've been doing is downloading everything from Google Photos so I can switch my cloud storage over to be business-only, and use OneDrive for everything personal. So all I've been doing is downloading the Google Takeout of Google Photos. It compiled my entire Google Photos into 10 ~50GB files, which I was downloading. One finished, and then the other 9 failed while I was watching an LTT video on Youtube; the computer locked up as if the GPU failed (screen stopped moving, artifacts, and sound glitching), then after a few seconds it all went back to normal as if nothing happened, but all of the downloads had failed. So I tried opening File Explorer to delete the interrupted downloads, and it locked up and the entire computer crashed. I rebooted, got back into Windows just fine, tried to delete those incomplete downloads, and it locked up again; I tried in safe mode, and it locked up. I finally was able to delete them one at a time, and now if I empty the recycle bin, it locks up the computer again. It's now to the point where I can use my computer completely fine for however long I want, but the second I open File Explorer, it locks up. Yet I can operate everything normally if the files and programs are on my desktop. I have no idea why this is happening, but I tried downgrading my BIOS to F8 and that did nothing, so I flashed back to F9j and everything is still fine, except my RAID array was dismantled, and for some reason it's now showing I have 24 unknown PCIe storage devices, so I can't even find the NVMe's to put them back into RAID0. I looked into my processor, because there is a P0 and an R0 9900K; mine is a P0, so it's supported on BIOS F3 (the launch BIOS), and that still has the issue with PCIe devices. It also scrolls extremely slowly for some reason, like a solid 3-5 seconds between selections just to scroll down through the PCIe devices, so this is either something wrong with the motherboard, the M.2 ports, or the NVMe's, but I don't understand why this just happened overnight while it was only downloading stuff from Google Photos. I can't even set up the RAID again to see if I can recover that Windows install, or at the very least do a clean install again on the NVMe's in RAID0, because every device is labeled "PCIe Storage Dev On Port X" and the ports are 1 through 24; there is no other info on any port or what it is, so I have no idea which of these 24 are the three NVMe's, and all I can do is change them from "Not RST Controlled" to "RST Controlled". There's also no info at all under NVMe Configuration that tells me which PCIe ports the NVMe's are using.

So as of right now, I have a computer that boots into the BIOS and I can't do anything else. My RAID is gone, and I can't configure a new RAID because it now says I have 24 PCIe devices and doesn't tell me which ones they are, so I don't know which ones to assign RST Controlled to configure a new RAID, and there's no info anywhere else in the BIOS telling me which PCIe ports the NVMe's are assigned to. That's basically where I'm at right now. I also just lost everything from Google Drive as well, because I had already downloaded everything and it was all copying over to the HDD for archive when everything messed up on its own last night while I was sleeping. So my entire Google Drive backup is gone locally, and from Google Drive, so that's a nice touch as well lol. Also, some more info: I'm not running XMP, for diagnostics, just to take anything out of the situation that could cause issues, and I bought these NVMe's in April 2020, so they're legit one year old now; I don't think it's a failed drive. I checked the drives in CrystalDiskInfo and nothing came up as an issue; each drive shows a little over 3000 hours, and with the RAID benchmarked in CrystalDiskMark they get just over 3500MB/s read and a little over 3000MB/s write. My HDD's have over 12,000 hours of use and operate completely fine, so I'm suspecting a BIOS or motherboard issue and not a failed NVMe.
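Since CrystalDiskInfo is Windows-only, the closest equivalent from a Linux live USB is smartmontools; a sketch of what that check would look like (this wasn't part of the troubleshooting above, and the device name is an assumption):

    # Overall health verdict for the first NVMe (PASSED/FAILED)
    sudo smartctl -H /dev/nvme0

    # Full SMART dump: power-on hours, unsafe shutdowns, media errors
    sudo smartctl -a /dev/nvme0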
  16. I'm single now, so I'm going to try and look into this iMac again. It's been almost a year since I made this thread. Since posting this, I did figure out a "solution", but it wasn't a very great one. To put this bluntly, you CAN'T run Windows 10 on this model of iMac; it's an issue with the GPU. That's probably why Apple only officially supports Windows 8.1 as the highest supported OS for Bootcamp. So I just ended up wiping this and reinstalling macOS 10.13.6, and she used it like that to finish her degree. Now that I have it back and she's gone, I need to get my mind off crap, so I'm going to take another look at making this thing cool. This is what I've figured out so far:
- Windows 10 will not adjust brightness or wake from sleep with the AMD 6770M or 6970M
- This model of iMac CAN have the GPU upgraded to a 780M or 880M (and some others, but those are the two most stable options)
- If I can find a good price on a 780M or hopefully an 880M, I just need to find the three-pipe heat sink from the AMD 6970M model of the 2011 iMac, because of the TDP difference:
- My model, having the AMD 6770M, has the two-pipe 35W TDP heat sink
- The AMD 6970M model has the three-pipe 75W TDP heat sink
- The 780M and 880M are both 122W TDP chips
- Other reports online have successfully installed the 780M and 880M with the 6970M cooler using better thermal compound, without issues
- The 780M and 880M just need a BIOS flash to be supported on the iMac, and then they are functional out of the box; everything works like it was OEM
Since I'll be in there upgrading the GPU to one of those models, depending on what's cheaper, I'll have the entire motherboard out anyway, so I might as well upgrade the CPU:
- The i5-2500S can be upgraded to an i7-2600, as it was officially supported, and all of the CPU heat sinks for this model are the same, so I won't have issues there
- The i7-2600 is literally a direct plug-and-play upgrade: no flashing, no tweaks, just drop it in, use some quality thermal compound, and it'll actually be better than factory
- I did find that the Xeon E3 1275 V1 is almost identical: both Sandy Bridge, both 3.4GHz base clock, both turbo to 3.8GHz, both 95W TDP
- I just can't find anything so far on whether the Xeon E3 1275 V1 will drop in without any issues the way the i7-2600 will
From benchmarks online, the Xeon E3 1275 actually seems to perform better than the i7-2600 due to better power management and improved iGPU efficiency: on average the i7-2600 uses 114W and the Xeon E3 1275 uses 77W, and the Xeon has the HD 3000 iGPU, whereas the i7 has the HD 2000 iGPU. So now it comes down to price. I found an i7-2600 on eBay for $110 CAD and the Xeon E3 1275 V1 for $75 CAD, and I found the 780M for $250 CAD and the 880M for $315 CAD. These are my options:
- $325 for the Xeon with 780M
- $360 for the i7 with 780M
- $390 for the Xeon with 880M
- $425 for the i7 with 880M
My baseline upgrade here is the $360 for the i7/780M, because that's a guaranteed working upgrade, and I don't really want to spend an extra $65 to upgrade to the i7/880M. However, if I can figure out whether the Xeon will work without any issues (every function working; I don't care if there's more work to making it work, it just needs to work), then I would be looking at only a $30 upgrade to the Xeon/880M, which gets better performance from both the CPU and GPU for only $30 more. Sign me up for that.
On a completely different note here: once it's been upgraded to either Nvidia GPU, it will support dosdude1's patcher, so I can upgrade this to the latest macOS 11.0 as well as Windows 10 64-bit, run both on a 50/50-partitioned Fusion Drive so each OS gets a 1.25TB partition, and this would actually be a pretty deadly computer. Oh, and my build tally so far: I bought the iMac for $100 CAD because a local certified Apple repair shop quoted around $500 to replace the hard drive, plus the cost of the drive depending on which one we wanted to install (save money and downgrade, replace the factory capacity, or upgrade). So this is a $100 iMac. The SSD and HDD were free to me because they were just old ones that I removed from other builds and kept around as spare parts. All of the cables and adapters for the Fusion Drive kit I put together myself using old spare parts I had lying around, and the 32GB of RAM was again just old parts I had lying around from broken laptops. The RAM kit I used was actually ordered for a customer, and when it arrived she said she had just bought a new laptop, so I was stuck with the RAM, but she gave me her old laptop as payment for the time I had already put into it. So I just put a cheaper RAM kit in that laptop, sold it second hand, and made all of my money back on the new kit, so that was also free. Realistically, this is a $100 iMac that's going to be a pretty good computer for either $360 with the i7/780M, or even better for $390 with the Xeon/880M upgrade. Either way, this is a sub-$500 CAD iMac, and in the end, I just need this project to get my mind off of everything.
  17. I've edited the first post to include a video I just made of my build. The video was shot before I made the custom GPU, so I'll have to make another video of the final build one day.
  18. Before we get into this, I just want to point credit where credit is due. First off, I would like to thank Wendell from Level1Techs; without his guide and video, I wouldn't have even known this was possible, and I would have just spent over $600 CAD on new NVMe's for no reason if I wasn't able to get this working. (I guess it pays to do your research before buying hardware.) Next, I would like to thank the people on /r/linuxquestions and /r/pop_os who were helping me out through this entire process, start to finish. We can't forget about our very own @jdfthetech, who also just happened to be up all night and was able to give me some tips that got me back on track. And finally, I want to thank everyone who contributes to the ArchWiki; without that place, I would have been absolutely lost and would have switched back to Windows.

Now, onto the guide.

First, you want to make a Pop_OS! 20.04 USB installer (Intel/AMD or NVIDIA image). Once that's done, configure your BIOS for a Linux install, if you haven't already:

- Change the OS option, if you have it, to "Other OS"
- Make sure SATA Mode is AHCI (it will not work otherwise)
- Disable Fast Boot
- Save and restart
- Boot to your Pop_OS! 20.04 USB

If you don't want to follow along with this guide, I made a tutorial video HERE (video is not finished yet, this is just a place holder).

Once booted into the Pop_OS! USB, set up language and keyboard, and do not continue any further once it asks how to install. Open a terminal and elevate to root:

    sudo -i

Now check your devices to see which drives you're using:

    lsblk

Write down the device names that you want for your RAID (you will need these a lot), then start with the first drive:

    sudo gdisk /dev/DEVICE_NAME

You want this first drive to hold your EFI partition plus its share of the RAID capacity. For the EFI partition, enter the following as it asks:

    n, enter, enter, 1024M, ef00

Next, create the EXT4 partition that will be used for the RAID:

    n, enter, enter, enter, 8300

Write the changes:

    w, y

Check for the EFI partition:

    lsblk

Format the EFI partition you just created as FAT32:

    mkfs.fat -F32 /dev/YOUR_DEVICE_PARTITION

Continue partitioning every other drive as follows:

    sudo gdisk /dev/EVERY_OTHER_DEVICE

Make the boot partition on every drive:

    n, enter, enter, 1024M, 8300

Make the rest of the drive capacity the EXT4 partition for your RAID:

    n, enter, enter, enter, 8300

Write the changes:

    w, y

Continue until all the drives you want in your RAID are partitioned. Now we can make the RAID (0/1/5/6/10), where X = RAID level and Y = total number of drives in the RAID:

    mdadm --create /dev/md0 --verbose --level=X --raid-devices=Y /dev/YOUR_DEVICE_1_EXT4_PARTITION_2 /dev/YOUR_DEVICE_2_EXT4_PARTITION_2

Note that you are adding the SECOND PARTITION of each drive to the RAID, not the device itself; make sure to add the partition at the end, and keep appending to that command for every drive you're using in your RAID. Check the RAID status with:

    cat /proc/mdstat

If you selected anything other than RAID0, it will take a while to build the volume.
Keep checking with "cat /proc/mdstat" until it's done. Once completed, we can create the partitions needed on the RAID:

    sudo gdisk /dev/md0

We want to make a swap partition as our first partition on the new RAID volume. Keep in mind these don't need to be exact, but it's good practice to follow the rule of thumb for swap capacity relative to how much RAM you have. I'm going to start this list at 8GB of RAM, because if you have less than 8GB, you should probably be upgrading your RAM and not making a RAID boot setup lol. The first number is how much RAM you HAVE, the second number is how much CAPACITY the SWAP should be:

- 8GB - 3GB
- 12GB - 3GB
- 16GB - 4GB
- 24GB - 5GB
- 32GB - 6GB
- 64GB - 8GB
- 128GB - 11GB

For me, I have 32GB of RAM, so I need a 6GB swap. 1GB is 1024MB, and 1024 x 6 is 6144MB, so I'm going to be entering 6144M; change the value to meet your specs:

    n, enter, enter, 6144M, 8200

Now partition the rest of the RAID:

    n, enter, enter, enter, 8300

Write the changes:

    w, y

Now we can finally move to the Pop_OS! installer and configure the partitions:

- Select Custom Install
- Select the EFI partition on the FIRST drive by device number (they don't always appear in order), select /boot/efi, and make sure the format is fat32
- Select the BOOT partition you made on the SECOND drive by device number (again, it's not always the second drive in the list; check the device number), select Custom, input /boot into the box, and make sure the format is set to EXT4
- Select the SMALL partition on the RAID array and select Swap
- Select the LARGE partition on the RAID array, select / (for root), and make sure the format is EXT4 (you may be able to use other formats; I have not tried them and cannot guarantee they will work with this process)

Now you can finally select the orange button at the bottom right and install Pop_OS! (sometimes it will fail at the end; just ignore this). Once completed (or failed), go back into the terminal and mount the RAID:

    sudo mount /dev/md0 /mnt

Mount the boot partition (note these go under /mnt so the chroot below can see them):

    sudo mount /dev/YOUR_SECOND_DEVICE_BOOT_PARTITION /mnt/boot

Mount the EFI partition:

    sudo mount /dev/YOUR_FIRST_DEVICE_EFI_PARTITION /mnt/boot/efi

Now we can install mdadm on the RAID (it may already be installed, but try anyway):

    cd /mnt
    sudo mount --bind /dev dev
    sudo mount --bind /proc proc
    sudo mount --bind /sys sys
    sudo chroot .
    apt install mdadm

Now check the mdadm configuration to make sure the RAID UUID is there:

    cat /etc/mdadm/mdadm.conf

If it is not there, check the UUID manually:

    mdadm --detail /dev/md0

Copy the UUID, and now we can edit mdadm.conf:

    nano /etc/mdadm/mdadm.conf

Under where it says "# definitions of existing MD arrays", type in and paste your UUID:

    ARRAY /dev/md/0 metadata=1.2 UUID=YOUR_RAID_UUID name=pop-os:0

CTRL+X to save, then Y, then Enter. Now we need to update your changes:

    update-initramfs -u

Make sure it scans the changes:

    mdadm --detail --scan >> /etc/mdadm/mdadm.conf

Tell the system it needs to boot the RAID (change the X to the level of RAID you are using):

    echo raidX >> /etc/modules

Now just make sure the /boot and /boot/efi partitions are still mounted; mine unmounted at this point for some reason:

    lsblk

If you do not see anything that says /boot or /boot/efi, you need to remount them.
Remount the boot partition (from inside the chroot, the paths are /boot and /boot/efi):

    sudo mount /dev/YOUR_SECOND_DEVICE_BOOT_PARTITION /boot

Remount the EFI partition:

    sudo mount /dev/YOUR_FIRST_DEVICE_EFI_PARTITION /boot/efi

With /boot and /boot/efi remounted, check that mdadm made it into the initramfs:

    lsinitramfs /boot/initrd.img | grep mdadm

You need internet for this last step, so connect to Wifi if you're not using Ethernet. You should try to install grub2 in case it failed to install during the Pop_OS! install, and then update grub; even if it shows issues, it should be fine, as the other files will be there from the Pop_OS! installer:

    apt install grub2-common -y    (you may get an error, ignore this)
    update-grub2

Exit the chroot by typing exit, exit root by typing exit again, then reboot the computer:

    poweroff --reboot

Everything should go fine, and you should now be booting into a clean install of Pop_OS! 20.04 on your new RAID array, operating just like a normal install. From this point on, you just need to remember to never touch the drives separately. If you ever have to enter commands and it tells you to point at your boot drive, this will always be /dev/md0 (or whatever you called your RAID array); never use the device names we were using earlier to create the RAID, because if anything on those changes, it could corrupt the entire RAID, resulting in full data loss.

I hope this helps anyone who wants to set up RAID0 for a blazing fast boot drive to get the most FPS possible, or a safe and secure RAID1 for those with mission critical files, or even a RAID5/6/10 for those who want a little of both. I started learning mdadm straight out of Windows with very little Linux knowledge at 6pm April 24th; it's now 6am April 26th. It has been 36 hours straight, and I slept at my desk for 4 hours. No one has any excuse for not being able to learn something. It's time I get some much needed sleep.
  19. Yeah, I saw that; that was how I finally figured it all out and got it working. I just finished writing a full n00b guide on how to do this. I wanted to make sure I could repeat the process, so I formatted everything again and did it all a second time to confirm my process works for the guide, and I also made a video of the entire process that I need to edit now. I'm going to make a second thread for the guide so it's not cluttered with the posts about trying to figure stuff out.
  20. I couldn't sleep knowing I didn't figure it out, so I wiped my drives again and kept going. I finally got it with mdadm on EXT4 and it's running great. I ran a benchmark and got 3.2GB/s on the RAID, where on BTRFS I got 2.8GB/s, so it's a pretty big improvement using EXT4 over BTRFS, and definitely worth my time to keep working at mdadm. Thanks for the help; I had to come back to your post like 42 times to reread what you suggested, but it's up and running flawlessly now.
  21. The Arch wiki was just the best source for information. I used to be a distributor for Pop_OS! when I was a computer department manager a while back, so I just figured I would start there, as it's basically as simple as it gets; it's pretty much Ubuntu without the corruption, and it has built-in Nvidia drivers. I did end up figuring it out, well, figured something out. I found this video on Youtube of someone showing almost exactly what I wanted to do with this setup. He does a RAID1 setup, so I just changed the commands to say RAID5 where he says RAID1, and where he allocates two drives, I repeated the step for three drives, and everything is working perfectly. It uses btrfs, which I wasn't prepared for, but it works, it's stupid fast, it has snapshots to fall back on if I destroy something, and I allocated a backup to each of the drives, so if one fails, I literally just need to swap it out, it'll rebuild the failed partitions, and I can keep on going. Mission accomplished. I have no idea what btrfs is, or how all of this setup works, but I now have a fully functional NVMe RAID5 that boots Pop_OS! 20.04 perfectly, and it is fast as all hell. I would have preferred to use the traditional mdadm and ext4, as that seems to be the staple way to do this, so learning the basics of getting that working would have been ideal for growth, but this setup with btrfs is working fantastic. I'm not done with this yet, though; I'm going to attempt to put together an old computer with parts I have lying around and keep messing around with mdadm until I learn how it works.
  22. When you specified the size as 512M, how does this work? When I type in lsblk, it shows my three non-partitioned NVMe's as nvme0n1, nvme1n1, and nvme2n1. So I type in sudo gdisk /dev/nvme0n1. There are no partitions, so I just hit n. From there it asks for the partition number, so I hit 1 to make the first partition. And then it asks for the first sector, and I'm lost again. I'm trying to figure this out on the Arch wiki, but my brain is literally mush right now; I've been trying to figure this out since 6pm here, it's almost 3am now, and I still haven't figured out how to make the partitions.

For the first sector, it says default = 2048, but it doesn't say 2048 what. Like, Bytes? Kilobytes? Megabytes? Raccoons? I have no idea what to go off of there. And then it says +/- KMGTP, which I'm assuming means Kilo/Mega/Giga/Tera/Petabytes, but what is the +/-? From what I'm understanding from the Arch wiki, let's say I want to make this 512MB efi partition: when it asks for the first sector, I'm assuming I just hit enter since this is the first partition, so I'll let it pick its default starting sector of 2048.. raccoons.. or whatever unit it's telling me that 2048 means, and then for the last sector, I would pick my partition size of 512M? So if that's correct, and I do that, when I'm making the second partition, will it deduct the first partition amount, like, is math involved? Or does it figure it out on its own? Let's say when I make the next partition and it asks the same thing: do I have to input the first sector from where I left off and say to start at 512MB, or is it smart enough to figure out on its own that there is already 512MB allocated to partition 1, so if I just pick the default, it will start the second partition where the first ends at 512MB? Then, as for the last sector of the second partition, if I wanted a 2GB boot partition, would I enter 2GB? Or do I have to add the 512MB from the first partition to the last sector of the second partition (making the last sector of the second partition 2.5GB instead, to account for the 512MB in the first partition)?

Like, let's say I have a basket that holds 10 apples. My starting sector is apple 0, and I want my efi to be 2 apples big, so my last sector is 2 apples. When I do the next partition, if I tell it I want the next partition to be 5 apples, is it going to make the next sector end at 5 apples capacity (leaving 5 apple sectors available of the total 10 apple sectors), or is it going to know that 2 apples are already allocated, and add 5 more, allocating 7 apples total of the 10 available sectors? There's a lot of information about this on the Arch wiki, but I'm literally diving into this straight vanilla off the n00b train. It's like trying to do something you don't know how to do in a different language: not only do I have no idea what to do, but I have no idea how to do it, and even if I knew what I needed to do, I can't speak the language to ask for it in the correct terms. It's definitely making for an interesting night, that's for sure.
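For anyone who lands on this later, the answer I eventually pieced together: gdisk counts in 512-byte sectors, so the default first sector of 2048 means 1MiB in; the default first sector of each new partition automatically starts right after the previous partition, so no math is involved; and a size entered with a leading + (like +512M) is relative, while a bare number is an absolute position on the disk. In apple terms: +5 apples allocates 5 more for 7 total, while a bare 5 ends the partition at apple 5. A minimal annotated keystroke sketch; the drive name and sizes are just examples:

    sudo gdisk /dev/nvme0n1
    # n, 1, enter   <- accept default first sector 2048 (512-byte sectors = 1MiB in)
    # +512M, ef00   <- +size is relative, so partition 1 is exactly 512MiB
    # n, 2, enter   <- default first sector lands right after partition 1, automatically
    # +2G, 8300     <- partition 2 is exactly 2GiB, no offset math needed
    # w, y          <- write the table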
  23. I just switched fully from Windows to Linux. I was running a single 512GB NVMe as my boot drive when I switched, and everything was "fine". I was having some issues writing to my NTFS SSD's and HDD's, and one of my new HDD's wasn't coming up at all. After figuring some stuff out, I learned that using NVMe disables the last two SATA ports on the motherboard, so I removed my SSD's entirely, replaced them with NVMe's, and replaced my boot drive with a new NVMe as well. So now there are three 1TB NVMe's, and I moved the three 8TB HDD's down the SATA ports, so everything is recognized and we're all good. I tried making a RAID5 in the BIOS just like I normally would, and it turns out Linux will not recognize NVMe if the SATA mode is set to RAID; it MUST be AHCI, so software RAID is my only option. I found out mdadm is the option everyone seems to recommend for its stability, which seems to be exactly what I want from what I've found so far. That's it, this is where I'm at. I've been looking at tons of guides and tutorials for mdadm and I just can't figure it out. I found a bunch of really poor youtube videos in different languages, or with really slow progression where it takes them multiple seconds in silence to type the next command, so basically really bad sources of information, and I can't find just one solid guide, start to finish, "this is how to configure NVMe's in RAID for a Linux install", boom, done; nothing like that seems to exist. So far the most helpful information I found was from Wendell at Level1Techs, where he wrote a "guide" but skipped the entire first part, as I'm assuming it was written for more advanced users. And considering my last install just worked out of the box because it was to a single NVMe (it just worked; I didn't have to make manual partitions or anything, it was all GUI, just a few clicks and it was ready to go), naturally, I'm completely lost right now. Mostly what I need help with right now is from Wendell's post: How do I make the first partition? Is it all done in terminal? Can I use "Disks" to make this partition? Which NVMe is this done to? All of them? Just one? Just the first one? And how do I mark the partition efi? Does he mean the partition name needs to be efi? Or is this a property option for formatting? Also, how? The exact same overthinking process occurred for me; I don't know what to do first or where to even start. Once I get past those two steps, I should be able to follow his guide further and try to figure this out, and if I get it working, I'll probably write a more simplistic guide for the n00b into RAID. I really thought there would be way more information about this, as this is pretty much step one in switching to Linux if you use a RAID array. Let's hypothetically say someone has been using a hardware RAID as their Windows boot drive for years, never has issues, super simple, wants to switch to Linux, decides to try it out, and then suddenly this process hits them in the face; I can see why it might steer people away from Linux when this is so brutally simple on Windows.
I would honestly be switching back to Windows right now if I hadn't already run benchmarks and seen how much faster ext4 is compared to NTFS, and how idle system resources sit at 0% on Linux. It's just so much better for my computer/hydro bill, and there's just so much more unlocked performance with Linux right out of the box. I just don't want to go back to Windows now that I know what I'm missing out on, so I'm willing to pull my hair out all night and try to learn how this works. So, if anyone has some suggestions or commands for achieving the things I'm having issues with currently, that would be awesome.
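The two steps I was stuck on here turned out to be simpler than they looked: it can all be done in terminal, only the first drive gets the EFI partition, and "mark the partition efi" just means giving it the EF00 GPT type code and formatting it FAT32. A minimal sketch of those two steps, device name assumed; the full walkthrough is in my guide post:

    sudo gdisk /dev/nvme0n1    # only the first drive gets the EFI partition
    # n, enter, enter, +512M, ef00   <- ef00 is the GPT type code that "marks it efi"
    # w, y                           <- write the changes

    sudo mkfs.fat -F32 /dev/nvme0n1p1   # EFI system partitions are FAT32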
  24. Oh my god, I need to see this become a reality. It's really for the greater good of the universe that this becomes a real thing.
  25. I love builds like this, I was just getting invested and now we're here, can't wait for the next update