Search the Community
Showing results for tags 'proxmox ve'.
-
Introduction

Hey everyone! I am fairly new to self-hosting at home, about 2-3 years in. I bought my first Raspberry Pi roughly 3 years ago and my small home server 1.5 years ago. I first ran TrueNAS Scale as the OS, but switched to Proxmox VE because I had issues with pods hosted on the same system accessing each other. I talked about this with one of my teachers back then, and he advised me to install Proxmox instead of fiddling around with iptables and the like. I have been a huge fan of PVE ever since! I then also started hosting other VMs on there, to reclaim the computing power I had been using for the pods on TrueNAS Scale, and migrated some of my services onto an Ubuntu Server Docker instance. It has been like that ever since. Recently I started working with OpenShift and CI/CD pipelines with Tekton at work and wanted to do something similar privately to strengthen my Kubernetes knowledge. While researching, I found the project OpenShift is built on: OKD. Long story short: I checked the minimum requirements for OKD and realized that I am a few GB of RAM short...

My Goal

Please take a quick look at tables 1 and 2 in the official documentation: https://docs.okd.io/4.14/installing/installing_bare_metal/installing-bare-metal.html#installation-machine-requirements_installing-bare-metal For my cluster, I would need 3 control plane nodes and at least 2 workers. That would be a total of 14 CPU cores and 64 GB of RAM. I also have other VMs running on my system, listed below this section, so the hardware needs to be slightly better than that, just to be safe.

Current Specs

Case: Fractal Define 7 XL Black
CPU: AMD Ryzen 5 5600G, AM4, 3.9 GHz, 6-core
RAM: Corsair Vengeance LPX, 4x 16 GB, 3200 MHz, DDR4, DIMM
Motherboard: ASUS ROG STRIX B550-A Gaming, AM4, AMD B550, ATX
PSU: be quiet! Pure Power 11 CM, 600 W
Storage:
- 2x Intenso Top Performance 512 GB, M.2 2280
- 2x Samsung 870 QVO, 1000 GB, 2.5"
- 6x Seagate IronWolf 8 TB, 3.5", CMR
Other:
- 2x Delock 2-port SATA PCIe host bus adapter
- 3x Delock Molex-to-SATA power cable

Virtual Machines

NAS
CPU: 1 core [host]
RAM: 8 GiB [balloon=0]
Storage:
- 6x HDD pass-through
- local-zfs (on the M.2) 32 G virtual disk

Docker Services (Ubuntu Server)
CPU: 3 cores [x86-64-v2-AES]
RAM: 14 GiB
Storage: local-zfs (on the M.2) 128 G virtual disk

AdGuard (Ubuntu Server)
CPU: 1 core [x86-64-v2-AES]
RAM: 2 GiB
Storage: local-zfs (on the M.2) 32 G virtual disk

(The SSDs are currently unused, because I wanted to put the cluster onto them: distributing the control plane and worker nodes across them, with no ZFS pool, for possibly better performance?)

Issues

1. My system was experiencing high I/O delay, peaking at around 15-20% during write-intensive workloads. I hosted some Minecraft servers with large worlds on there at some point, and when they saved, all VMs lagged almost to death. This was while SSD1 and SSD2 were in RAID1, and I read that this can cause huge lag (possibly from ChatGPT, I cannot remember). So yesterday I migrated away from RAID1, also because I need more storage for my OKD cluster.
2. Another issue is that I had to buy those PCIe SATA adapters, because I didn't know that 2 of my 6 onboard SATA ports get disabled when both M.2 slots are in use. (I only decided later to buy SSDs for my system.)
3. Yesterday a friend of mine pointed out that my CPU only has two memory channels and that this is probably going to be a bottleneck for my RAM. He advised me to do some more research on the topic, though, and as you can see I also don't know that much about hardware.
4. For the RAM issue, I would actually buy 2x 32 GB sticks and replace 2 of my 4 16 GB sticks with them.
5. If I understood Proxmox VE correctly, it emulates certain CPU types, so the guest operating system would think it has 4 cores if I, for example, give it 1 socket with 4 x86-64-v2-AES cores. I also still don't quite understand which CPU type to choose; maybe I can get some advice here as well.
6. I would actually like to give my NAS VM much more RAM for faster caching, but it's not strictly necessary.

Conclusion

I would be very happy to get some advice about my current situation, and maybe you can point out other concerns that I haven't stumbled into yet. I am willing to spend around $500 on my upgrades (or maybe more, but then I'll have to save up some more money). I've also thought about moving the NAS onto another Proxmox system (together with the Docker services VM); this would be more expensive, but it would free this machine up for the cluster (with a small RAM upgrade).
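A quick sizing sanity check can be done in the shell. The per-node minimums below are my reading of the linked tables (control plane: 4 vCPU / 16 GiB, compute: 2 vCPU / 8 GiB; verify against the current docs, and remember the temporary bootstrap node comes on top during installation):

```shell
#!/bin/sh
# Per-node minimums as read from the OKD bare-metal tables (assumptions;
# double-check against the linked documentation).
CP_COUNT=3; CP_VCPU=4; CP_RAM=16   # control plane nodes
WK_COUNT=2; WK_VCPU=2; WK_RAM=8    # compute (worker) nodes

TOTAL_VCPU=$(( CP_COUNT * CP_VCPU + WK_COUNT * WK_VCPU ))
TOTAL_RAM=$(( CP_COUNT * CP_RAM + WK_COUNT * WK_RAM ))
echo "cluster minimum: ${TOTAL_VCPU} vCPU, ${TOTAL_RAM} GiB RAM"
```

With those figures the total lands at 16 vCPUs rather than the 14 cores mentioned above, so it is worth re-checking the table; either way, on a 6-core 5600G the vCPUs would be heavily overcommitted even before counting the other VMs.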
- 7 replies
-
- kubernetes
- hosting
-
(and 3 more)
Tagged with:
-
Hey there, my server is using Proxmox as hypervisor (KVM backend) to spawn VMs. I want to use the HBA (specs below) to pass through my 4 HDDs to a TrueNAS Core VM. To do that, I first enabled IOMMU support in the BIOS/UEFI. After that, I booted into Proxmox and checked whether the IOMMU is detected. It printed the following output:

[ 1.332679] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 1.344441] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 1.345378] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).

which to my mind means that the IOMMU is already enabled, even though I never configured it on the GRUB cmdline. I also checked the IOMMU groups, to validate that the HBA is in its own separate group:

[...]
IOMMU Group 25 2d:00.0 Serial Attached SCSI controller [0107]: LSI Logic / Symbios Logic SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)
[...]

And in fact, it is in its own dedicated IOMMU group. Has anyone encountered this before, or is my understanding of IOMMU wrong at some point?

Server specs:
- AsRockRack X570D4U mainboard
- AMD Ryzen 5 3600
- 2x 16 GB Crucial Micron ECC RAM
- 2x Seagate IronWolf 4 TB
- 2x WD Red 4 TB
- LSI Logic / Symbios Logic SAS2308 HBA
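For anyone wanting to reproduce the group check above: the groups can be walked directly under /sys. A small sketch of the usual listing loop (the base path is a parameter only so it can be exercised against a fake tree; on a real Proxmox host, call it with /sys/kernel/iommu_groups):

```shell
#!/bin/sh
# Print each IOMMU group and the PCI devices assigned to it.
list_iommu_groups() {
    base="$1"
    for g in "$base"/*; do
        [ -d "$g" ] || continue
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            [ -e "$d" ] || continue
            dev="${d##*/}"
            # lspci -nns adds the human-readable name and vendor:device IDs;
            # fall back to the raw PCI address if lspci is unavailable.
            desc=$(lspci -nns "$dev" 2>/dev/null)
            echo "  ${desc:-$dev}"
        done
    done
}

list_iommu_groups /sys/kernel/iommu_groups
```

A device that shows up in its own group, as the SAS2308 does here, is exactly what passthrough wants; the kernel messages and the populated groups together mean the IOMMU is active even without a manual GRUB entry.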
- 5 replies
-
- proxmox
- proxmox ve
-
(and 3 more)
Tagged with:
-
I am trying to set up a Valheim server using Proxmox for me and my friends to do a hecking vike on. This is proving challenging. I have Proxmox up and running, and have a container running debian-11-turnkey-gamersserver_17.1-1_amd64.tar.gz that, as far as I can tell, is up and running and has the Valheim server files installed. I don't know how to connect to the server from a game, set up password protection for the server, etc. and have had a hard time with the documentation for the turnkey server to the point that I'm looking for help here. Why did I choose Proxmox? This is intended to be a multi-use server, and this is the first of 4-5 containers I intend to have running eventually, and what research I did pointed to Proxmox being the most flexible way to do that, despite a steeper learning curve. Server information is as follows: OS: Proxmox VE 8.0-2 CPU: AMD Ryzen 3 3300X Mobo: Asus ROG STRIX B550-I GAMING Memory: 2x 16GB G.Skill Ripjaws V GPU: ASRock Intel Arc A380 Challenger ITX PSU: SeaSonic FOCUS SPX (2021) 750W 80+ Platinum
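On the password question: the vanilla dedicated server takes its name, world, and password as launch flags, so password protection lives wherever the turnkey container starts the server (its start script), not in-game. A sketch of the launch line with example values (flag names are from Valheim's stock start_server.sh; the binary's path inside the turnkey container may differ):

```shell
#!/bin/sh
# Example Valheim dedicated-server launch line (values are placeholders).
# The password must be at least 5 characters and not contained in the name.
SERVER_NAME="MyServer"
WORLD_NAME="Dedicated"
SERVER_PASS="changeme"

CMD="./valheim_server.x86_64 -name $SERVER_NAME -port 2456 -world $WORLD_NAME -password $SERVER_PASS -public 1"
echo "$CMD"
```

To connect, the in-game "Join IP" commonly takes <container-IP>:2456, while the Steam server browser uses the query port one above it (2457); if friends join over the internet, UDP 2456-2458 needs forwarding to the container.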
-
Hello, I hope this is the correct place to post this. I have installed Proxmox multiple times today. I don't have RAID turned on (I did at first). I'm running a smaller SSD for Proxmox, one NVMe SSD, and two 1 TB HDDs that I want to run in RAID 1. My problem is that Proxmox seems to be taking all the storage available on the machine, so there is nothing left to make a ZFS pool or a directory. During the install I pointed it at the smaller SSD. Can someone help me with this? I am completely lost. I have tried reformatting the drives that Proxmox shouldn't use, but that leaves the machine unable to start.
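Worth knowing here: the installer only partitions the disk you select, so if the other drives look "used", leftover partition tables from earlier attempts are the more likely culprit (they can be cleared in the web UI under the node's Disks view before creating a new ZFS pool or directory). If the goal is also to keep Proxmox confined to part of the small SSD, the installer's advanced options on the target-disk screen cap what it takes. A sketch of those fields (names as shown in the installer's "Options" dialog for ext4/xfs with LVM; values are examples):

```
hdsize  = 100   # GiB of the selected disk the installer may use at all
maxroot = 30    # upper bound for the root volume
minfree = 8     # space left unallocated in the LVM volume group
maxvz   = 40    # upper bound for the local-lvm "data" volume
```

Anything above hdsize stays unpartitioned and can be used later.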
-
I'm quite new to Proxmox and I have no clue what went wrong. I'm using version 7.1-2; see the screenshots for the settings I'm using for the VM and for the output it gave. Thank you!
- 2 replies
-
- proxmox ve
- troubleshooting
-
(and 1 more)
Tagged with:
-
Hi, I have converted an older computer into a Proxmox server that I tinker with, but I have a problem. I have set up several different Minecraft servers on different OSes, and they work fine locally, but when I try to connect with my public IP and port it doesn't work; it just says "connection refused". I'm thankful for any help.
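"Connection refused" from outside usually means nothing is accepting connections on that public IP and port: either the server is bound to a specific interface, the VM's firewall blocks it, or the router isn't forwarding TCP 25565 to the VM's LAN address. A sketch of the relevant server.properties defaults (field names from the vanilla Java server; check your own file):

```
server-ip=          # leave empty so the server listens on all interfaces
server-port=25565   # the TCP port the router must forward to this VM's LAN IP
```

A useful split test: if connecting to the VM's private LAN IP works but the public IP does not, the server itself is fine and the problem is the port forward (or an ISP behind CG-NAT, where no forward will help).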
-
Hi everyone! I'm trying to install Proxmox VE 7.1.2 on my new (to me) IBM server (X5 x3850), and I'm not able to get it to install properly. I took a video of one "loop" of the install process, as there's too much there to reasonably type out here on the forum. Does anyone have any suggestions or troubleshooting advice for how to make this work? I'm open to anything at this point, as so far all I have is a ~$750 paperweight (albeit a rather heavy one, as this thing weighs about 115 lbs!). Just to bring everyone up to speed: I have already tried TrueNAS Core and TrueNAS Scale, as well as Proxmox VE (obviously). I have not tried Unraid yet, as I don't think it fits my use case nearly as well as Proxmox (I want to do full virtualization, not just Docker containers). I could be wrong though.

Specific hardware specs:
- 4x Xeon E7-4870 (10c/20t each, 2.4 GHz base, 2.8 GHz boost)
- 128 GB DDR3-1067 ECC (8x 16 GB)
- 8-port LSI SAS 6 Gb/s controller card (L3-25121-79B)
- 200 GB SLC 2.5" SAS SSD (intended Proxmox boot drive)
- 5x 600 GB 10k RPM 2.5" SAS HDD
- 4-port Sun Oracle gigabit network card (511-1422-01)
- 80 GB Fusion-io SLC cache SSD (PCIe)
- 2 replies
-
- proxmox ve
- ibm server
-
(and 1 more)
Tagged with:
-
Hey guys, I'm currently trying to install Proxmox on my system. After booting from a USB drive I was greeted by the Proxmox installer. After I selected "Install Proxmox VE", it started testing my devices for a valid ISO, and after a few retries it spat out an error: "[ERROR] no device with valid ISO found, please check your installation medium unable to continue (type exit or CTRL-D to reboot)". Does anyone have an idea what could fix my problem or what is causing it? Update: I fixed it by selecting "DD mode" in Rufus while creating the bootable USB.
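For anyone hitting the same error: Rufus's default ISO mode rewrites the stick as a file copy, while DD mode writes the image byte-for-byte, which is what the Proxmox installer expects when it scans for a valid ISO. The equivalent from a Linux/macOS shell is a raw dd copy; a sketch (the target must be the whole USB device, e.g. /dev/sdX, not a partition, and everything on it is destroyed):

```shell
#!/bin/sh
# Raw ("DD mode") copy of the installer image onto the USB device.
# WARNING: the target device is wiped; double-check it with `lsblk` first.
write_iso() {
    iso="$1"
    target="$2"
    dd if="$iso" of="$target" bs=1M conv=fsync
}
# usage: write_iso proxmox-ve.iso /dev/sdX   (both names are placeholders)
```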
- 2 replies
-
- proxmox ve
- proxmox
- (and 4 more)
-
Hello! I have installed Proxmox on my Dell R720, HP DL380p Gen8, and my Intel R1304JP4OC. The problem is that the Dell and the HP get the error "Could not activate storage 'local-zfs', zfs error: cannot import 'rpool': no such pool available (500)". The Intel server runs software RAID, while the other two are locked to hardware RAID. I have tried updating the initramfs according to a post on the Proxmox forum, but it doesn't work. I can use a workaround by putting the boot drive and the rest of the storage as two separate disks in the RAID configuration, but this leaves me unable to use 4 of the 16 disks the servers have (2x 8). Is it possible to fix this? Thanks in advance /Erik
-
Preface / Plan: The reasoning behind this is that we run MacBooks and we both miss gaming, so my thinking is that we can build a central gaming "server" and run one gaming VM each, as well as get a central storage solution and game server hosting. The plan is to accumulate parts over time, since we are in no rush and GPUs aren't available, but I ran into a small issue / oversight on my part: since the plan is to use Proxmox with GPU passthrough and a Ryzen CPU (no integrated graphics), there is no GPU left over for Proxmox and the console. My questions to you are the following:
1. Will "GPU 3" be sufficient for console duty? Will it work with the limited number of PCIe lanes?
2. Is the PSU sufficient? I know I can't go for Nvidia's top-of-the-line GPUs because of the power surges, and that's not my goal; we just want to be able to enjoy games without screaming laptops and lag.
3. Will I be able to run an M.2 boot drive, or should I go for SATA since we will be using 3 GPUs?

Budget (including currency): 30 000 SEK / 3 000 USD ish
Country: Sweden
Games, programs or workloads that it will be used for: Starbound, Civ 6, Cities, Minecraft, Proxmox, dev VMs
Other details:
Motherboard: ASUS ProArt B550-CREATOR
CPU: AMD Ryzen 9 5950X
CPU Cooler: Noctua NH-D15
Memory: 64 GB unbuffered Kingston ECC
Case: Corsair Crystal 680X
GPU 1: To be decided (leaning towards a 3060 or 6600)
GPU 2: AMD RX 580 (we already own this)
GPU 3: ASUS GeForce GT710 1GB (for Proxmox)
Boot SSD: 250 GB Samsung M.2 if possible, otherwise a Samsung 2.5" SATA
PSU: Corsair 1000 W should do the trick
Mass storage: 3x Seagate 2 TB, RAID 5
Router: Cisco RV345
Hypervisor: Proxmox VE
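For the passthrough side of the plan: each gaming VM gets its GPU via a hostpci entry in its config, while the GT710 simply stays unclaimed by vfio so the host keeps a console. A minimal sketch of a VM config fragment (the PCI address is a placeholder; q35 plus OVMF is the usual combination for GPU passthrough):

```
# /etc/pve/qemu-server/<vmid>.conf (fragment; 0000:0a:00 is an example address)
machine: q35
bios: ovmf
hostpci0: 0000:0a:00,pcie=1,x-vga=1
```

Passing `0a:00` without a function number hands over the whole device group (GPU plus its HDMI audio function), which is generally what you want.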
-
Hello all, and thanks in advance for any suggestions.

Back in October I built myself a Proxmox server using an i9-11900K on an ASUS Z590-A Gaming WiFi, 128 GB OLOY 3200 CL16 RAM, and a 1 TB Intel 960 SSD, in a Supermicro super chassis. I also have an Adaptec 2274400-R RAID card with ten 6 TB Seagate SAS HDDs in RAID 10. Proxmox installed flawlessly other than an issue with the network interface sometimes being lost at boot, which I was able to fix right away. The server worked flawlessly for what I needed from it and gave me no issues. I backed up the installation SSD to another SSD and cold-stored it in case I ever ran into issues, validating that it worked before storing it.

At the beginning of March, I noticed that the system was becoming unresponsive after being on 24/7 since the new year. According to the logs, it had installed some updates automatically a few days earlier. I restarted the system hoping this would fix my issues; unfortunately it would not come back up after rebooting, giving the error "Volume group "PVE" not found. Cannot process volume group PVE". Upon seeing this I tried swapping in my cold-stored SSD, hoping the issue was either the SSD or the installation. Unfortunately this gave the same result, plus a new issue: "no suitable video mode found. Booting in blind mode", on either SSD. I looked up the "no suitable video mode" error and found that sometimes you need to set the default video device in the BIOS to the CPU's integrated graphics. That did nothing but give me the same two issues.

I then decided to try reinstalling the OS, but this again gave me issues. It started with the "no suitable video mode found. Booting in blind mode" error, and after a few cold boots I could get it to actually start an installer. Upon booting into the installer I would get a few different errors: "VFS: unable to mount root fs on unknown-block(1,0)", "Kernel panic - not syncing: attempted to kill init", and "Kernel panic - not syncing: fatal exception in interrupt. Kernel offset: 0x2d000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff) ---[ end kernel panic - not syncing: fatal exception in interrupt ]---".

After seeing all these errors, I suspected my installation medium. I updated my ISO from proxmox-ve_7.1-1 to proxmox-ve_7.1-2 and tried 4 different ISO-to-USB burning tools on 3 different flash drives, all producing the same errors. I then tried a clean Debian install in case there was an issue with the Proxmox image: again no boot and the same issues. I then wondered whether the kernel panics could be a RAM issue, so I ran MemTest86; all memory reported good after 4 passes on each stick individually and all sticks together. At this point I tried a BIOS update in case it was a BIOS issue, again with no real success. Occasionally, after a cold boot, I could get into the Proxmox installer without any issues, but at the network interface selection window it would not see any NIC and would lock up.

At this point I am stumped, since nothing I do seems to get the system working again. Any additional diagnostic steps to get it installed would be greatly appreciated. I can take photos of the screen during boot if anyone wants to see more of what happens. Thanks!
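One low-cost thing to try on the video side: the "no suitable video mode found" message comes from GRUB and is often cosmetic, but disabling kernel modesetting removes one variable (it will not explain the kernel panics by itself). The change is made at the boot menu, so nothing is permanent:

```
# At the installer's boot menu, highlight the entry and press `e`,
# then append to the end of the line beginning with `linux`:
nomodeset
# and boot with Ctrl-X (or F10).
```

If the installer then runs reliably, the same parameter can be made permanent after install via GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub followed by update-grub.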
-
Is there a way in Proxmox to disable the HTTPS certs or enable plain HTTP? I'm trying to route all the traffic through my web proxy, but it does not allow it for some reason, and I cannot figure out why. I also tried setting up the proxy without certs, using the standard Proxmox certs, but that had the same problem.
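For context: pveproxy only serves HTTPS on port 8006 and has no supported plain-HTTP switch, so the usual approach is to terminate TLS at the reverse proxy and speak HTTPS to the backend anyway. A minimal nginx sketch (hostname and cert paths are placeholders; certificate verification is disabled because pveproxy ships a self-signed cert unless you replace it):

```nginx
server {
    listen 443 ssl;
    server_name pve.example.com;
    ssl_certificate     /etc/nginx/ssl/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/privkey.pem;

    location / {
        proxy_pass https://127.0.0.1:8006;
        proxy_ssl_verify off;            # pveproxy's default cert is self-signed
        proxy_http_version 1.1;          # required for the noVNC websocket
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

The websocket headers are the part most proxies miss; without them the web UI loads but the noVNC consoles fail.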
-
- web
- proxy server
-
(and 4 more)
Tagged with:
-
Hello, I have a question: is Proxmox VE a reliable piece of software? Or is there other open-source virtualization software that I can directly access over an SSH connection, with a web interface to create virtual machines? Please let me know; I intend to fully virtualize 2 servers.
- 2 replies
-
- server
- proxmox ve
-
(and 4 more)
Tagged with:
-
Greetings,

Currently I am running a consumer PC, recently moved into this server case (Inter-Tech IPC 4U-4129-N). It has 1x 8 TB HDD, 1x 1 TB HDD, 1x 248 GB boot SSD, 8 GB RAM, and an FX-6350. It currently runs Windows Server 2019 bare metal and is mostly used for a Minecraft server and 24/7 Plex. Now I have gotten an old DL380 G5 from school with one 74 GB 10k HDD, with probably more of those to come; if not, I'm going to replace it with one 248 GB Kingston SSD. It has 32 GB RAM and two Xeon 5160s. My plan is to use the HP server to run the services like the Minecraft server and Plex. The FXServer would then only be used for network storage, as the case makes for easy expansion.

Now I need some opinions and help: is this a good setup? If not, should I ditch the HP server or keep both? Something else? If I use the HP server, it does not need RAID; I do not find it critical enough for that. I have been installing Proxmox 4.4 and ESXi 5.5 on it, but I'm currently not satisfied. ESXi only works at version 5.5, and its management/documentation is lackluster and confusing, as far as I can tell without a working web interface. Proxmox works better, but even though I've installed the VirtIO drivers for everything and configured everything according to best practices, the network is slow, maxing out around 40 Mb/s in a speed test while the bare-metal FXServer gets 500 Mb/s, the full speed. So if I can't fix this, Proxmox is out of the picture. If I do use this server, any tips? Just run it bare metal?

Disregarding how I use the FXServer, I would like to make a RAID array with future expansion opportunities and maybe some redundancy. I would want this for the possible speed increase, and so that I have one volume for all my movies, backups, series, etc. When I have filled up my 8 TB drive, I would like the possibility of easy expansion. I have an extra 1 TB HDD and a 2 TB HDD, and if I could, I would like to use these as well. But I am under the impression you need HDDs of similar sizes, so most likely more 8 TB drives (mine are shucked from Easystores). Can someone elaborate on this? Should I do software RAID? Hardware RAID? For possible buying advice: I live in the Netherlands. Need more information? Please ask. Any help is most appreciated.
-
PCI-Passthrough made easy for: Proxmox VE 6.0-6.2
Bendy_BSD posted a topic in Programming
It's finally here and tested (for now). I finally finished all the tools, and I gave credit where credit's due. So please enjoy; so far it hasn't broken my system, so it should work just fine. The script assumes you downloaded the zip to your home folder (in the terminal, type cd $HOME and press Enter, then pwd and Enter, and it should show your home directory). If you downloaded it somewhere else, cd to that folder and type mv PROXPCIPASSTOOLS.zip $HOME/ and press Enter. Next, type unzip PROXPCIPASSTOOLS.zip && rm PROXPCIPASSTOOLS.zip and press Enter. Then cd into PROXPCIPASSTOOLS/, type ./install.sh, press Enter, and leave the keyboard alone; it will automate the default setup for you. This also includes configuring: 1.) GRUB 2.) modules 3.) PCI ven#:dev# 4.) blacklist modules conf. 5.) backups of necessary files. So, enjoy. PROXPCIPASSTOOLS.zip
- proxmox ve
- bash script
-
(and 3 more)
Tagged with: