Posts posted by AMD Lover


  1. Little update: I had installed a GTX 1650, but with vSphere, even though you can pass the PCI device through to the VM, because it's consumer-grade hardware it's not detected by the guest OS. I picked up a Quadro P2000 and tested it out, and hardware decode on my Plex server now works exactly like it should. You can see in the pink highlight in the second picture that if Plex cannot "Direct Stream" the content it will switch to transcoding, and if it can use hardware instead of the CPU it will look like the screenshots below.

     

    [screenshots: Plex dashboard showing hardware (hw) transcoding]
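For anyone who still wants to try a GeForce card anyway: the commonly reported failure under ESXi passthrough is the NVIDIA driver refusing to initialize once it detects a hypervisor (the "Code 43" problem), and the workaround people usually cite is hiding the hypervisor flag in the VM's .vmx file. This is a community workaround, not something tested here, and behavior varies by driver version; a supported Quadro avoids the issue entirely.

```
# Add to the VM's .vmx (or via the VM's advanced configuration parameters)
# before boot. Widely reported workaround for GeForce passthrough on ESXi;
# not guaranteed across NVIDIA driver versions.
hypervisor.cpuid.v0 = "FALSE"
```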


  2. 2 hours ago, RollinLower said:

    my goodness, that rack and that switch are pornographic! 

    what do you use the servers in there for?

    Mostly lab environments: VMs for server or network testing, and learning how to configure various things. One of them runs GNS3; the others are running vSphere. I've included some updated pictures, since the ones in my last post were from a long time ago.

     

    [updated photos of the rack]


  3. On 12/12/2019 at 10:56 AM, realDX said:

    Greetings Amd Lover and sorry for bothering, I have the same motherboard and was wondering if you succeeded in raiding 0 the two nvme drives.. sorry for the bump of this old thread! 

    I ended up just using both drives individually. I believe you could still put them in a RAID using Intel Rapid Storage Technology, which is basically software RAID enabled through the BIOS. So you would have to enable it in the BIOS, and I don't know whether you would need a separate boot drive or if you could install Windows on one of the SSDs and then enable RAID.

    That's the direction I would look, however, if you're trying to RAID on this motherboard; I don't believe it supports hardware RAID.

     

     

     

    Thanks


  4. Just now, GenericFanboy said:

    Try more apps from the app store, if they all lag then the problem is with the app store. the easy solution for that would be; dont use the app store. But if you feel the need to use the app store anyway for spotify and stuff; Idk lol

     

    Just now, SnowWolf370 said:

    Stop using apps on your computer and use programs instead.

     

    Solves 99% of issues these days.

     

    Could it be from using G-Sync in windowed mode? I just installed a graphics driver update and did a clean install to wipe all my custom settings, in case something in there was messing with it, and that fixed it. I'll try to replicate it after work by turning windowed G-Sync back on.


  5. So after taking my computer apart and putting it back together, I've noticed that my mouse lags when I open certain windows, like Spotify and Messenger (which both came from the app store). Once I minimize those windows I have no issues in Chrome and other regular Windows applications. Any ideas? It is a wireless mouse, but signal interference wouldn't make sense since it only happens with the same windows, repeatedly.

     


  6. So I recently removed the stock cooler on my GPU and reapplied new thermal paste to everything. I want to check the VRAM temps somehow to make sure I got good paste application on those chips. Is this possible? I don't see anything in GPU-Z.

     

     

    Thanks


  7. UPDATE:

     

    Finally getting an update up; it's been a while. This has been kind of a discontinuous build: I got the desktop built and then didn't have a monitor for three months. I was using my TV as my monitor while holding out for an ultrawide, which seriously hindered my use of the desktop. I finally got a 34" ultrawide, and then took the desktop to work to use as a folding rig during folding month.

     

    Now that folding is over, I took the time to take the computer apart, clean out all the dust, upgrade the CPU fan, and swap in my sleeved cables. I also took the opportunity to replace all the thermal paste, including taking the GPU apart and repasting it after the folding; I'm sure 70 °C, 24 hours a day for 34 days, isn't the best for thermal paste. While cleaning the motherboard, one of the letters of the graphics on the NVMe heatsink came off, so I decided to roll with it and debadge it entirely. I really like how it looks now.

     

    I'm still wanting to get the AIO GPU cooler posted somewhere on the first page of this log. The noise isn't actually that bad when gaming; I don't find it annoying with other ambient noise in the room, but it could always be quieter. Overall the CPU stays cool, never getting above 70-80 °C, and the GPU runs around 70 °C.

     

    Any ideas on moving forward are always appreciated!!

     

    Photos taken on Pixel 4XL

     

    [photo]

    Old fan left, new fan on right

    [build photos]

     

     


  8. This isn't even close to how I would like it to look, but it's better, and it's a continued work in progress. Post some pictures of your horror stories below; no random pictures off Google, post stuff you've seen first-hand or worked on. Share your tips on cleaning up that rat's nest of cables. Post any cable porn you might have done or come across as well.

     

    [cable management photos]


  9. If I have a server with four network adapters and I want to team those interfaces, would it be better to create two separate teams of two NICs each (one team for the host and one team for the virtual machines), or to create one large team of all four NICs and share it between host and VM traffic?

     

    The reason I'm debating this is that if your virtual machines are sending data while you're also replicating your VMs (host traffic), it seems in theory like it would be better to split that traffic; however, I may be wrong in my thinking, and I'm hoping someone else can give me some insight. I also believe that software teaming in Windows doesn't actually give you an aggregated 4 Gbps of throughput; instead you would have to configure something like LACP with EtherChannel?
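One detail worth knowing here: flow-based teaming (LACP/EtherChannel, and Windows teaming modes alike) hashes each flow onto exactly one member link, so a single transfer never exceeds one NIC's speed; the aggregate bandwidth only shows up across many concurrent flows. A toy sketch of that behavior (`pick_link` is illustrative, not any vendor's actual hash policy):

```python
# Toy model of flow hashing in link aggregation: each flow is pinned to one
# member link, so one transfer tops out at a single NIC's speed, and the
# aggregate 4 Gbps only materializes across many concurrent flows.

def pick_link(src_ip: str, dst_ip: str, num_links: int = 4) -> int:
    """Deterministically map a flow (src/dst pair) onto one member link."""
    return hash((src_ip, dst_ip)) % num_links

# The same flow always lands on the same link: no per-flow aggregation.
links_for_one_flow = {pick_link("10.0.0.5", "10.0.0.9") for _ in range(1000)}
print(len(links_for_one_flow))  # 1

# Many distinct flows spread across the team and can use all member links.
links_for_many_flows = {pick_link(f"10.0.1.{i}", "10.0.0.9") for i in range(200)}
```

This is why splitting into two purpose-built teams can be a reasonable choice when host replication and VM traffic would otherwise compete on the same hash buckets.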

     

    Just looking for someone with some more experience in this.

     

     

    Thanks


  10. So I decided to run a folding client on my gaming computer temporarily. I have a server that is my main host, but I wanted to pull in some more points, so I started folding on my gaming rig as well, GPU only.

     

    The GPU is running around 65-70 °C. The question is: will running a GPU at 100% load like this for, say, a month have any negative effects? Thermal paste, stability, lifespan?

     

    Gigabyte RTX 2080 Windforce

    700w Platinum PSU

    i7 9700K


  11. Just now, Damascus said:

    No, no, no, please no.  Don't comment on things you have subzero understanding of.  1600rpm is basically silent on the B9 redux, because it basically doesn't move any air

    The nf-B9 redux is, absolutely not the fan for this use case.  The flat, high angle blades and low rpm make aesthetically pleasing and quiet fan an atrocious option for any usecase with a level of restriction.  If the nf-a9's colors are a big stopping block, I recomend either of these guys as an excellent stand-in of equal quality.  Noiseblocker isn't as well known as noctua or Be quiet!, but it stands toe to toe with any fan on the market.  

     

    https://www.blacknoise.com/site/en/products/noiseblocker-it-fans/nb-blacksilent-pro-series/92x92x25mm.php?lang=EN

     

    https://www.bequiet.com/en/casefans/312

    I actually remember those fans from back when frozencpu.com was the shit!

    I just looked up the spec sheet, and the 92mm EB has horrible static pressure (0.872 mm H2O).


  12. So I was comparing this NF-B9 redux-1600 PWM to this NF-A9 PWM. The redux will better match my build, but its static pressure is 1.61 mm H2O compared to 2.28 mm H2O for the NF-A9. This seems like a large difference, and the fan will be pulling through a CPU cooler that sits only millimeters away from the side panel.

     

    Is the static pressure going to make a difference, maybe a couple of degrees?

     

     

    Thanks!!!


  13. 2 hours ago, Mr.Humble said:

    Looking good! I'm just wondering why the move to RAID10 for the SSDs - are you using them to cache the array?

     

    Also why did you choose the Ultrastar drives?

     

    Do you use the machine just for Plex or does it do some other stuff?

    I was going to move to RAID 10 on the SSDs because I plan on adding two more, so I'll have four of them. I thought about RAID 5, but RAID 10 will give me better performance all the way around. I'm using the SSD array for VM VMDKs, so each VM and its storage run off the SSDs; Plex and other bulk storage live on the HDDs.
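To put rough numbers on the RAID 5 vs RAID 10 trade-off, here's a quick sketch of usable capacity per level. It assumes equal-size drives; real controllers reserve a little overhead for metadata.

```python
# Usable capacity for the RAID levels discussed, assuming equal-size drives.

def raid_usable_tb(level: str, drives: int, drive_tb: float) -> float:
    if level == "RAID1":
        return drive_tb                  # full mirror of one drive
    if level == "RAID5":
        return (drives - 1) * drive_tb   # one drive's worth of parity
    if level == "RAID10":
        return (drives // 2) * drive_tb  # striped mirrored pairs
    raise ValueError(f"unknown level: {level}")

# 4x 500GB SSDs: RAID 5 yields more space, RAID 10 better write performance.
print(raid_usable_tb("RAID10", 4, 0.5))  # 1.0 TB
print(raid_usable_tb("RAID5", 4, 0.5))   # 1.5 TB
# 6x 8TB Ultrastars in RAID 10.
print(raid_usable_tb("RAID10", 6, 8.0))  # 24.0 TB
```

So RAID 10 trades a third of the four-SSD array's RAID 5 capacity for faster writes and no parity-rebuild penalty.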

     

    I chose the Ultrastar drives because they have better performance than NAS drives while being fairly cheap compared to other datacenter drives.

     

    It gets used for other stuff as well. I'll be going back to VMware vSphere as the hypervisor. I'll have a Server 2019 VM for Active Directory, DNS, and DHCP. I also at one point had a Pi-hole VM, and I might spin up a software firewall at some point.


  14. Hey guys, this is going to be a small build log about upgrades to my home server. I had run out of space for Plex storage, since I was running just a small array of random used 500GB HDDs. I'll list my current and planned specs below, with changes in BOLD. Feel free to ask some questions or make some recommendations!

     

    I changed the stock cooler out for the Noctua a little while back. I will also be adding a RAID Expander to accommodate the extra drives.

     

    So far I have bought only four of the six 8TB Ultrastar drives, so I still have two more to purchase.

     

    Current Specs: 

    CPU: Intel Xeon E5-2690 | GPU: NONE | Motherboard: SUPERMICRO X9SRL-F  | RAM: 64GB (8x8GB) Micron VLP DDR3-1600 ECC | PSU: SUPERMICRO 665W 80 PLUS Bronze | STORAGE: 2x Samsung 860 EVO 500GB (RAID 1) & RANDOM DRIVES (RAID 5) | COOLER: Noctua NH-U12DXi4 with 2x Noctua NF-F12 iPPC 3000  | CASE: SUPERMICRO CSE-842TQ-665B 4U | OS: vSphere 6.7 U2 | 

     

    Planned Specs:

    CPU: Intel Xeon E5-2690 | GPU: Quadro for Plex decoding | Motherboard: SUPERMICRO X9SRL-F  | RAM: 64GB (8x8GB) Micron VLP DDR3-1600 ECC | PSU: Supermicro 500W Multi-Output Redundant Power Supply (PWS-503R-PQ) | STORAGE: 4x Samsung 860 EVO 500GB (RAID 10) & 6x 8TB WD Ultrastar Datacenter (RAID 10) | COOLER: Noctua NH-U12DXi4 with 2x Noctua NF-F12 iPPC 3000 | CASE: SUPERMICRO CSE-842TQ-665B 4U | OS: vSphere 6.7 U2 | 

     

     

     

    [server photos]

     
