
Livin

Member
  • Posts

    216
  • Joined

  • Last visited

Profile Information

  • Gender
    Male
  • Location
    Abbotsford, BC, Canada

System

  • CPU
    Intel Core i7-13700k
  • Motherboard
    Asus ROG Strix Z690-E Gaming
  • RAM
    32GB (2x16GB) G.Skill Trident Z RGB 5600MHz CL36
  • GPU
    Sapphire RX 7900 XTX Nitro
  • Case
    Phanteks p600s
  • Storage
    WD 2TB SN850X Black, HP ex900 512GB SSD, Seagate Ironwolf 3TB
  • PSU
    Corsair RM1000x
  • Display(s)
    Alienware AW3423DWF, 2x Samsung C24F390
  • Cooling
    Arctic Liquid Freezer II 420
  • Keyboard
    Corsair K100
  • Mouse
    Glorious Model D Wireless
  • Sound
    Creative SoundBlaster Zx
  • Operating System
    Windows 11 Pro

Recent Profile Visitors

1,534 profile views
  1. Huh... totally missed that this event was happening and have been folding the whole time since the end of the last event, lol. In it for the push to 500M for me, and for seeing the LTT team hit #3.
  2. There are a few ways you could accomplish it, but in your particular scenario none are really practical. There is something called "teaming" that you can do with network cards to effectively run multiple adapters together as a single connection, but this requires the other side (in your case, the apartment networking) to also be aware of it, either through manual configuration or a protocol such as Link Aggregation Control Protocol (LACP). There are other ways to "bond" multiple connections together through software or hardware, but they come with a lot of challenges in routing traffic. Depending on the solution, you may not be able to use it as one big pipe but only as multiple small pipes (5 x 5Mbps wouldn't give you 25Mbps for one service, but it would let five different services each use 5Mbps). Having no control over the other side very much limits your options here. This also assumes each port you have access to gets its own 5Mbps allotment and isn't capped across multiple ports. So yes, it is possible from a technical standpoint, but not really feasible for what you're looking to accomplish in your situation.
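To illustrate the "multiple small pipes" point, here's a minimal Python sketch. The port names and the round-robin assignment are made up for illustration (this is not a real bonding API): without LACP-style coordination on both ends, each traffic flow gets pinned to a single link, so no single flow ever exceeds one port's 5Mbps even though the aggregate across ports is 25Mbps.

```python
from itertools import cycle

# Per-port cap from the scenario above: five apartment ports at 5 Mbps each.
LINKS = {f"port{i}": 5.0 for i in range(1, 6)}

def assign_flows(flows):
    """Pin each flow to one link, round-robin: no per-packet splitting,
    which is how most bonding modes behave without LACP on the far side."""
    ports = cycle(LINKS)
    return {flow: next(ports) for flow in flows}

assignment = assign_flows(["download", "video-call", "game"])
per_flow_cap = max(LINKS[port] for port in assignment.values())  # one port's cap
aggregate = sum(LINKS.values())                                  # all ports combined
print(per_flow_cap, aggregate)
```

Each flow here tops out at 5.0 Mbps even though the links sum to 25.0 Mbps, which is why bonding helps many concurrent users more than one big transfer.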
  3. I don't know about your monitor specifically, but if it is an OLED it will generally have auto-dimming enabled as a burn-in risk reduction. There may be a way to hack it out, but it's generally something you'll want to leave alone (as annoying as it can be at times).
  4. Got my 7900 XTX back from RMA last week and threw the other cards into a spare machine to run for the rest of the event. Pushing for the top 100.
  5. This is exactly what I'm seeing. It's (sort of) good to know I'm not the only one.
  6. Yeah, I let it run for the day yesterday and the utilization varied wildly depending on the WU. It's a shame; the most I'm seeing is around 85% utilization, and the realistic average is around 40%. The 3060 Ti I was using before had no problem averaging 80%+ over the course of a day. Well, I have another system I can get the 3060 Ti set back up in for the remainder of the event, so I'll probably do that. Disappointed the 7900 XTX couldn't help more.
  7. I know the thread's older, but I thought I'd post a quick update on the hot spot conversation. Through various events, I ended up getting credit for my 7900 XTX Taichi and have swapped it for a Sapphire 7900 XTX Nitro. Hot spot on the Nitro is way better. Now I'm getting a 10 - 13 degree delta between edge temp and hot spot, with a max hot spot temp of 84C. I don't know what was happening specifically with the Taichi card, but for a similar-size cooler the Nitro is performing much closer to my expectations. Keep in mind, I did disassemble the Taichi and repaste it, and still ended up with a 30 degree delta...
  8. Very strange: I get home from work today and it's actually working harder. Average is still pretty tanked, but current looks much more like normal. Yup, the 13700k was idling along when both screenshots were taken. Nope, 16x link speed, and as above, the system is otherwise idle during both screenshots. Very confused; I guess we'll just see how it goes.
  9. I just got my 7900 XTX replaced and back in my rig and wanted to use it for the remainder of the folding month. Everything in benchmarks and games seems to be working well, but F@H seems to only be using the card at ~60%, and power is sitting between 100W - 150W (benchmarks will typically push it above 400W). I've had it running overnight to see if anything changes, but it still seems the same, with production of around 1.5M PPD (less than half the expected output of the card). Anyone have any ideas on what I can do? Is it just a matter of what projects I'm getting assigned?
  10. Back again with some updates... ASRock didn't end up figuring anything out, and I was able to get an adjusted refund from the vendor the card was purchased from (pricing on it had dropped since the card was first purchased, and I was OK with the arrangement). Since I knew exactly what I was looking for in a potential issue, I cleared with the vendor purchasing a completely different 7900 XTX (went for a Sapphire 7900 XTX Nitro) on the condition that it could be returned if it ran into the same issue. Short answer: yes, it did run into the same issue.

I really wanted to get somewhere further on this, though, so I started playing around with settings again. One thing that changed, either between the cards or the driver versions that were installed, was that I had much more luck tweaking custom resolutions with this new card. Previously the AMD panel wouldn't let me save them, saying I needed to "clear all errors" even though there were none. I found that the flickering (again, only with no load, HDR on, and 165Hz) seemed to be affected most when playing with the G.Pixel Clock (kHz) in the AMD panel custom resolutions. Doing some more discovery, I found that this clock directly affected the refresh rate that was available, dependent also on the values entered in the "Timing Total" fields. I tried a number of custom resolutions using some of the timing and G.Pixel Clock settings from this page's pixel clock calculator: https://www.monitortests.com/pixelclock.php?width=3440&height=1440&refresh=165&decimals=2&minhblank=56&maxhblank=560&hmultiple=8&minvblank=5&maxvblank=90&vmultiple=1&maxpclock=165 I did get some success using a reduced timing total so I could hit the higher refresh rates with a lower G.Pixel Clock, but one side effect I noticed was the card locking memory frequency to full speed and eating around 100W at idle. Not ideal.

After a bit more playing, I noticed the "Timing Standard" dropdown, which gave a number of automated options for timing control rather than the manual tweaking I was doing. Reading up on this Nvidia page about timing standards, I figured I'd give CVT and CVT-Reduced Blanking a try. CVT was a no-go (still encountered lots of flickering), but switching to CVT-Reduced Blanking appears to be working! There is still some increased power draw at idle, pulling 60W instead of the 20W - 40W it was bouncing around between before, but I'll take it! Memory is not clocking all the way down (likely the cause of the increased draw) but is at least coming down to 909MHz. In case some poor soul is running into the same issue with a 7900 XTX and the AW3423DWF strangely flickering at idle, try a custom resolution. Here are the settings currently working for me.
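For anyone wanting to sanity-check their timings, the relationship is just pixel clock = horizontal total x vertical total x refresh rate. A quick Python sketch shows why reduced blanking needs a much lower pixel clock; the blanking figures here are illustrative round numbers, not the AW3423DWF's exact EDID values.

```python
def pixel_clock_mhz(h_total: int, v_total: int, refresh_hz: float) -> float:
    """Pixel clock (MHz) required to scan h_total x v_total at refresh_hz."""
    return h_total * v_total * refresh_hz / 1_000_000

# 3440x1440 @ 165 Hz with two different (illustrative) blanking intervals.
generous = pixel_clock_mhz(3440 + 560, 1440 + 90, 165)  # large blanking interval
reduced  = pixel_clock_mhz(3440 + 160, 1440 + 60, 165)  # CVT-RB-style blanking
print(f"{generous:.1f} MHz vs {reduced:.1f} MHz")
```

With these numbers the reduced-blanking mode needs 891.0 MHz versus 1009.8 MHz, which is the same effect as shrinking the Timing Total fields by hand.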
  11. My card's on RMA right now for other reasons, but I will definitely be checking that out after it's back. A 16 degree delta on hot spot is way more reasonable than the 30 degrees I was seeing.
  12. It's a bit on the high side, but the throttle point should be 110C (where it will start downclocking). For reference, I have a 7900 XTX Taichi (so a beefier cooler) and it generally sat between 90 - 100C with 70% fan speed in Diablo 4 with the framerate uncapped. There are others who have been getting better temps than mine as well, but even after a re-paste that's where it ended up. EDIT - Saw the concern about the comparison between edge and hot spot temps. With my 90 - 100C hot spot, the edge temp never got above 70C. I had a 3080 previously and was concerned with the large hot spot delta as well. Still not super happy with a 30 degree delta there, but after going through the effort of repasting myself I couldn't see it getting any better.
  13. Been running as an experiment for the last week or so. This is mixed with gaming and some on/off time on a 3060 Ti (and includes the attached monitors for the station). I put the cost in as $0.13/kWh, but it should really come out closer to $0.12/kWh (it's a blended rate based on usage over 60 days).
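For anyone wanting to estimate their own folding cost, it's just average draw x hours x rate. A quick Python sketch; the 300W average draw and 24/7 runtime are assumptions for illustration, and only the $0.13/kWh rate comes from the post above.

```python
def energy_cost(avg_watts: float, hours: float, rate_per_kwh: float) -> float:
    """Electricity cost in dollars: kW x hours x $/kWh."""
    return avg_watts / 1000 * hours * rate_per_kwh

# One week of 24/7 folding at an assumed 300 W average system draw.
week = energy_cost(300, 24 * 7, 0.13)
print(f"${week:.2f}")
```

At those assumed numbers a week comes out to about $6.55, which is why the blended rate only moves the total by pennies.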
  14. In rainbow puke startup mode... Running with my main rig. The 7900 XTX is out on RMA right now, but I'm hoping to have it back in for the event. I have an RX 560 in my media PC that I'll throw on as well, and I'll probably add a 3060 Ti for some extra (currently running as my backup in the main rig).
  15. Just an update on this for track-record's sake, in case someone comes across this in a Google search later down the road. I'm sending the card in for RMA since it has a noisy fan as well now. Will post another update once it's back. Currently running on a 3060 Ti and everything with the display is fine, so the 7900 XTX card I have (had) was definitely a factor.