About unf0rg0tt3n

  • Title
  • Birthday Apr 17, 1993

Profile Information

  • Location
  • Gender
  • Occupation
    IT Specialist


  • CPU
    2× Xeon E5-2640
  • RAM
    256 GB DDR3-ECC
  • GPU
    AMD FirePro 7900, NVIDIA Quadro 600
  • Storage
    250 GB Kingston SSD, 2× 500 GB WD Black 7200 RPM
  • PSU
  • Display(s)
    DELL P2417H
  • Operating System
    Windows 10, Debian, Ubuntu


  1. Hi guys, The problem I'd like to discuss with you is something I asked on the TrueNAS forums, but the responsiveness over there isn't that great, and the matters I have discussed here are mostly resolved. I am running TrueNAS 12.0 stable in a VM with a passed-through HBA, and it's working great. I also have a pool which isn't on the passed-through HBA, so I migrated all files over to the good pool. That part is done and okay. Now I want to get my jails over to the new pool. - iocage export <jail name> - copied all jails from imag
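For reference, the export/import workflow I'm describing would look roughly like this (jail and pool names here are placeholders, and this is a sketch of the usual iocage sequence, not my exact commands):

```shell
# Stop and export the jail on the old pool; export writes a zip image
# (plus a checksum file) under the active pool's iocage/images directory
iocage stop myjail
iocage export myjail

# Copy the exported image and checksum over to the new pool
cp /mnt/oldpool/iocage/images/myjail_* /mnt/newpool/iocage/images/

# Activate the new pool for iocage, then import and start the jail there
iocage activate newpool
iocage import myjail
iocage start myjail
```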
  2. Thanks guys! Unfortunately it isn't possible but I still appreciate the explanations.
  3. Ah, that's too bad! Is there any other solution that you know of? Those fans tend to get really loud sometimes.
  4. Hi guys, I have a server chassis with 3 hot-swap fans. They are 4-pin PWM fans, which slide onto a backplane that has a PWM header, and that backplane gets powered by Molex. So I am wondering: is there a female Molex to PWM fan adapter? I'd like to keep the fans a little quieter sometimes. I did find another post somewhere on the LTT forums that referred to Molex fans, but this isn't the same. Thanks!
  5. Heya guys, Hope everyone is still safe during these strange times! I seek guidance, can someone please help me? Currently I have a Proxmox server running with dual Xeon E5-2680 v2, 256 GB RAM and plenty of storage (plenty for me). I also have FreeNAS running on that server, and currently the disks are passed through directly using "disk by-id". The disks are all connected to an HBA, which isn't something manageable like in enterprise servers. Everything is working well, but how do I change from disk by-id to just passing through the PCIe device? Because when I try
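The switch I'm asking about would, as far as I understand it, go something like this on the Proxmox host (VM ID 100, the PCI address, and the `scsi1` slot are placeholders for my actual setup):

```shell
# Find the HBA's PCI address (e.g. 01:00.0)
lspci | grep -i hba

# With IOMMU enabled on the host (e.g. intel_iommu=on on the kernel
# command line), attach the whole HBA to the FreeNAS VM
qm set 100 -hostpci0 01:00.0

# Remove the old per-disk by-id mapping(s), e.g. the one on scsi1
qm set 100 -delete scsi1
```

With the whole controller passed through, FreeNAS would see the raw disks directly instead of virtual disks mapped by-id.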
  6. Thanks! What are the consequences of not being able to expand a raidz vdev?
  7. Hey guys, I have a FreeNAS server with 2 pools: 1) 4× 1 TB drives in raidz1, 2) 2 drives striped for unimportant data. Pool one leaves me with 2.55 TB of usable storage; it's not enough and I need more. I was thinking, will the following work: replace 1 of the 4 drives with a 2 TB drive, resilver, then do the second the same way... and so on, until all 4 drives are swapped for 2 TB ones. I know 2 TB is small, but these are 2.5-inch drives with a maximum clearance of 10 mm, so it kind of is blegh. Will it work though? And is it possible to get rid of pool 2 a
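The drive-by-drive swap I'm proposing is, as I understand it, the standard way to grow a raidz vdev; roughly like this at the CLI (pool name `tank` and device names are placeholders, and in FreeNAS the same steps are available through the UI):

```shell
# Let the pool grow automatically once every member disk is larger
zpool set autoexpand=on tank

# Replace one drive at a time, waiting for each resilver to finish
zpool replace tank ada1 ada5      # old 1 TB disk -> new 2 TB disk
zpool status tank                 # watch until the resilver completes
# ...repeat for the remaining three drives before pulling the next one...
```

Once the fourth resilver completes, the extra capacity should become available.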
  8. I completely forgot about this topic. I got it to work after I disabled S.M.A.R.T. inside FreeNAS. FreeNAS tries to check on the disks which are passed through, which means the low-level functionality is still in use by Proxmox. FreeNAS can't grab the values because the disk is in use, and it freezes. Disabling the S.M.A.R.T. service in the FreeNAS services panel was my ticket to getting this working. No more freezes *knocks on wood*
  9. I may be closer to the solution than I was this morning. To check, I did the following: - Disabled S.M.A.R.T. - Disabled Rsync - Disabled Nextcloud So, all day, no crash... I enabled Nextcloud an hour ago and it's still running. Looking at the error image I posted, it may have something to do with S.M.A.R.T., because it hangs on drive checkups. I know my drives have no spin-down timeout or anything like that, so it actually may be S.M.A.R.T. I will test later by starting that particular service, and who knows
  10. I'm passing through the disks. The disk timeout is disabled, and write speeds are at 1.6 GB per second.
  11. It does nothing until I kill it. noVNC, SSH... nothing works. I had 11.3-RC2, then RC3, and now I'm running the stable release (since yesterday), but it is still crashing. The logs really don't say a thing; there's just a blank gap where the freeze first occurs.
  12. I have full passthrough with disk by-id; FreeNAS has full control. I didn't think about the ZFS-on-host thing, but that might be it, because I wanted FreeNAS to have control over the drives and not Proxmox. Also, I can't afford mistakes when doing stuff with storage on my host.
  13. Hi guys, I have been running FreeNAS for several weeks now and I love it. Unfortunately FreeNAS-11.3 is making my life miserable with 100% CPU utilization once in a while. I only have one jail (Nextcloud), but when the 100% utilization happens (while I am not using my system at all), it totally freezes and isn't responsive. Fortunately it's running on a hypervisor, so at least I am able to kill the VM. The console output while the system froze was: I disabled S.M.A.R.T. to check if that will help; Proxmox already has that function anyway. My server setup: Proxmox hypervisor
  14. I have used a Quadro 600 card, but that didn't work out; according to NVIDIA you need a subscription to enable that special function. AMD worked instantly, and an NVIDIA GTX 750 Ti only worked inside a Linux VM.