Search the Community
Showing results for tags 'gpu'.
-
I'm stuck on which GPU after returning to build my first PC since the Xbox 360, lol. My specs and parts so far are: Gigabyte X570S UD, Ryzen 9 5900X, Corsair Vengeance LPX 64GB DDR4 3600MHz CL18, Crucial P3 Plus 2TB M.2 SSD, Phanteks PH-P1000PS Revolt X 1000W ATX (the reason for this PSU is for building a mini-ITX server later on). I play games for fun, not competitively, but I also do some 3D work, such as making a doll of Neptune for my daughter to 3D print, so Blender or Fusion 360 (still looking at which of the two will be best for a starter). Games: COD, PUBG, Anno 1800, Warhammer 40k, Total War, Space Marine 2 when released, Cities: Skylines 2, MS Flight Sim, Cyberpunk 2077, Tomb Raider, Assassin's Creed, Resident Evil, Alan Wake 2, to name just a few. Help on which GPU to get on a budget of no more than £800 would be appreciated; I'd also look at purchasing a second one if there's no issue with that option. I've read about FSR, DLSS, ray tracing etc. and how these can affect gameplay and graphics on AMD and Nvidia... just stuck on which direction to go. Budget (including currency): Less than £800. Country: UK. Games, programs or workloads that it will be used for: gaming and some 3D model making with my daughter for 3D printing fun. Other details: see above.
-
The other day I bought a used PowerColor Red Devil RX 6800 XT. When playing games, especially at high frame rates and in game menus, I hear coil whine coming from my GPU and it's really annoying. I tried lowering the voltage to 1060mV and capping the frame rate to 140fps (below my monitor's native refresh rate), and the noise does get lower, but I can still hear it to the point of it annoying me just enough. The only culprit for this problem can be my PSU, which is a Chieftec ECO 700W GPE-700S https://www.chieftec.eu/products-detail/118/ECO_SERIES/122/GPE-700S It's a European PSU that focuses on power efficiency and saving energy, and it's a brand that hardly any Americans have even heard of. I was thinking of buying a new, more expensive PSU like the Corsair RM850e, which is not only fully modular but also has two different PCIe cables, unlike mine, which only came with one. I've heard people saying that plugging in two (6+2) pins from two different PCIe cables does help with lowering coil whine noise. Also, the PowerColor website does say that the "Minimum Recommended PSU" for their RX 6800 XT is 850W, so maybe it's my fault for not taking that seriously. So, what are your thoughts? I don't want to end up buying a $100 more expensive PSU just to have the same coil whine noise again. That's why I need your reassurance! The rest of my specs: CPU: Ryzen 5 5600, MB: MSI B450 Gaming Plus Max, RAM: HyperX Predator 16GB (2x8GB) DDR4 3200, PSU: Chieftec ECO 700W GPE-700S
-
I feel this might be an easy one. Is it okay if the PSU PCIe/VGA cables are touching the GPU? Right now I'm switching to a 7800 XT; since this is a bigger card, the second cable is touching some parts of the GPU. Added a pic. My current setup: - AMD 5700X3D with MSI's Kombo Strike 2 enabled - MSI B450 Tomahawk Max motherboard - 2x16GB TeamGroup Vulcan 3200MHz CL16, A-XMP enabled - XFX Speedster QICK319 7800 XT (just arrived) - EVGA 650W G5 SuperNOVA Gold PSU (I already made another topic about my PSU; I'm aware it's not the recommended wattage) - 1 NVMe M.2 Samsung 970, 1 SSD for storage, 1 HDD for storage - 7 fans. Thx in advance!
-
Recently (within the past week) my primary monitor (Asus VG34VQL1B) has been having an issue. It's connected via DisplayPort to my RTX 3060 Ti. It only happens on the primary monitor, not the second. I have changed the CPU, motherboard, and RAM; it did not change anything.
-
Hi! I hope you guys are doing fine. This is my first post here, and I have a question regarding why my monitor turns black and freezes suddenly when I am using Premiere Pro or After Effects. I do some intermediate-level work in Premiere Pro and After Effects, and I also use some plugins with these applications. I already tried/did these things:
- Added more RAM, from 16GB to 32GB
- Replaced the stock CPU fan with a Thermalright dual tower
- Reseated the GPU (1660 Super)
- Used DDU a couple of times already and made sure I was using the 1660 Super and not the integrated graphics (AMD Ryzen 5 5600G)
- Replaced the PSU from an MSI 550 to a Corsair CX750
- Took everything apart except the RAM and CPU
- Used DDU one more time and replaced the driver from Game Ready to Studio
I tried benchmarking with Unigine Heaven and ran it on extreme; the screen didn't turn black or freeze. I am now lost and don't know what to ask Google anymore, since I've tried almost everything a simple gamer could do. I also tried reinstalling Windows 10 at one point just to clear some stuff up. Can someone enlighten me as to why this is happening? PC specs:
CPU: Ryzen 5 5600G (1 year and 4 months old, already repasted)
Mobo: MSI B550M Pro-VDH WiFi AM4
GPU: GTX 1660 Super (bought 7 months ago from a local shop; not sure if it was new, because the cover seal was already open when I took a look at it, and the seller said it was as good as new with new thermal pads and paste; it ran fine for the first 5 months, then the black screens and freezes started in the remaining 2)
PSU: Corsair CX750
RAM: Kingston Fury 3200MHz, 4 sticks, 32GB (8GB each)
Storage (if it matters): 1 Kingston NVMe 256GB (half full), 1 Kingston SSD 500GB, and 1 WD Green HDD 2TB
8 fans and a Secure UPS 650VA automatic
Everything else I do is fine: graphics-intensive games like RDR2, GTA V, and Dark Souls, plus some FPS games and RPGs, all run fine. Premiere Pro and After Effects are the main culprits. They're pirated, though, but this didn't happen before; it started about 2 months ago, I think, when I began doing more graphics-intensive work in Pr and Ae. Please help. Thank you!
-
Hello, I need help! My computer is a prebuilt MSI Nightblade MI3 with an i5-8400, 16GB DDR4, and a 350-watt PSU, and I have just changed the GPU from a 1060 3GB to a 3050 8GB. My games keep crashing. I have tried Assassin's Creed Valhalla: it crashes on high settings but works well on medium. I have tried Assassin's Creed Odyssey: it works well on high settings but not on very high. I have tried Warzone, and it crashes when shaders are at around 50%. I have also tried Overwatch 2, which won't crash, and Fortnite, which also won't crash! I know the suggested PSU for a 3050 is 550 watts, but my prebuilt is very small and I don't have many options. I am not very good with computers, and my budget is very limited. Does it crash because of a bad PSU, or because my CPU is a bottleneck? The CPU is at 100% when I play those games! Thanks for the help!
-
Help me choose. I play all sorts of games, but mostly esports; on the "heavier side" I play BF V at 1080p high and Hunt: Showdown at high. Options are: 5700 XT at 146 euros, 6600 XT at 200 euros, 6650 XT at 285 euros. Other specs: 14400F, 32GB DDR5 @ 6800MHz, ASRock B760M PG Riptide, 700W PSU, M.2 SSD. Some helpful links down below: Asus Radeon RX 6650 XT ROG Strix OC V2 Gaming 8GB - graphics card - Multitronic AMD Radeon RX 5700 XT vs AMD Radeon RX 6600 XT vs Asus ROG Strix Radeon RX 6650 XT OC Edition | Graphics card comparison (versus.com) RX 6650 XT vs RX 5700 XT [6-Benchmark Showdown] (technical.city) RX 6650 XT vs RX 6600 XT [6-Benchmark Showdown] (technical.city)
-
So I have two options: I could get a 7800 XT new, which is a fairly recent card, or I could get a used 4070 Ti for the same price of $550. It is this specific card: AORUS GeForce RTX 4070 Ti ELITE 12G. What is your opinion? Pros and cons? I know my other components quite well, but I am not so sure about graphics cards.
-
Came across this; wtf is this thing? Seems like a scam. Looks like an AMD Radeon Instinct MI50! Help me out here, what is this? AMD Radeon JIESHUO VII 16GB dedicated mining graphics GPU 4096-bit GDDR6X PCI-E 3.0 mi50 16g KAS and other computer desktop - AliExpress
- 3 replies
-
- aliexpress
- gpu
-
(and 2 more)
Tagged with:
-
I'm about to buy a PC build with a ROG Strix 1080. The only issue is that it's broken where the GPU is attached to the case, which makes the GPU vibrate against the case. Should I buy it? For the price, it seems like a steal. Will this affect performance? And is there any solution for how I can fix it?
-
My GPU randomly disconnects from my system; it is very similar to just ripping it out while it is running. I have no idea why it is doing this. I have tried everything from different drivers to using Linux to running it in a different system, where it ran fine until I put the GPU back into my main system (I am going to run it in this secondary system some more). It started happening right after both a BIOS update and a Windows update that happened at the same time (Windows didn't give me a choice). I have an i7-7700, RX 5600 XT, and a Dell Precision 3620 motherboard. I have tried downgrading the BIOS, and it simply will not let me, as the most recent update was a "critical" update. Anything is much appreciated, as I don't want to spend money on something that isn't the problem.
-
I need some quick assistance with my Nvidia GTX 1660 Super GPU. I've had it for almost three years, and until recently, it ran at a cool 65-70 degrees Celsius during gaming sessions like Overwatch. I understand that over time, GPUs can experience some degree of heating. In fact, I've been vigilant about monitoring its temperature, even when it reached around 75 degrees Celsius about a year ago. However, out of nowhere, the temperature has shot up to a concerning 85 degrees Celsius without any changes to my setup or game settings. Any ideas on what might be causing this sudden spike in temperature? Your advice would be greatly appreciated!
-
Hi, I am building my first PC. I can start it with my CPU graphics, and I installed Windows 10 Pro. Everything works except that my GPU isn't detected in Device Manager, and I can't get an image from the GPU's outputs. I don't know what I can do at this point, so I'm asking here (first time asking for help in an online forum). A lot of the parts were bought used, but I don't think that is the problem (the mobo detects something in the PCIe slot). As I said, I can run Windows with the monitor connected to the CPU graphics. FYI, I have internet and can install things (not an Nvidia driver, because no Nvidia GPU is detected) in case I need apps, drivers, etc. to fix the problem. The fans of the GPU are spinning (unlike in other posts) and the lights are on. I am using a DisplayPort cable. I don't have another GPU to test with, or another PC to test the GPU in. Also no other monitor at the moment (I am moving soon and have my other monitor there, not here). I don't even get a signal (not even for the BIOS) when I plug the monitor into the GPU. In the BIOS (under Advanced - System Agent - PCI Express Configuration) I see x16 detected in the first slot, but that is about it. My build (OS: Windows 10 Pro 64-bit):
Motherboard: Asus Z490-E
CPU: Intel i9-11900K
GPU: Nvidia RTX 4070 12GB
RAM: 2x16GB (32GB) 3200MHz kit (Corsair Vengeance Pro)
PSU: be quiet! 12M 850W
CPU cooler: be quiet! Dark Rock Elite
Storage: Fikwot FN955 4TB M.2 PCIe Gen4
The monitor is old but works; I confirmed that after putting the new NVMe into my laptop and installing the basic display drivers (it is below full HD, so Windows had a problem with it). What I've tried already: I installed a BIOS update (newest one, 2801 x64) with EZ Flash and a USB stick. I checked all the cables and turned it off and back on (while trying my luck in the BIOS I had to reset it and update it again a few times). In the BIOS I tried:
- enabling iGPU Multi-Monitor
- setting the primary display to Auto or to the GPU
I also tried:
- a different power cable for the GPU (PCIe 5.0 on both sides, as well as a 2x8-to-5.0 splitter, and I made sure they were correctly installed at the GPU and the PSU)
- putting the GPU into a different slot and connecting the M.2 to the second slot (it is still there)
- doing all Windows updates, etc.
- uninstalling the graphics drivers
- resetting the BIOS
I really appreciate any help. If you need more info or pictures, please say so and I will get them. Thank you!
-
Hey all LTT forumers! I wanted to make a post to sanity-check myself on this... I recently bought a used RTX 3090 MSI Suprim X for a good price, but I've been slightly worried about the temps. Is 105°C on the hotspot too high? I've left the power limit at stock, but I found that limiting it to 70% works well to keep the temps under control while losing about 5 to 10 FPS... I purchased some K5 Pro and Kryonaut Extreme just in case, so I am ready to tackle a repasting job, but I know the MSI Suprim is a pain in the rear to take apart and put back together. Since I'm out of warranty, is it worth it? The temps and power draw below are after 10 minutes in Cities: Skylines 2 with 100k population.
-
Hello LTT community, I just woke up and am about to choose violence. OK, joking aside: I have an RX 6700 XT that was working flawlessly as of yesterday morning. I downloaded the most recent driver update from AMD's website. As soon as the installer finished, I rebooted. Everything went as expected, nothing wrong (so far). I watched a few YouTube videos and then headed over to Facebook. Then I heard a click (like the one my power strip makes when I turn on my computer) and my TV just cut off. I glanced over and my HDD light was still blinking, fans still spinning, mobo light still on. My first reaction was that my GPU had just died. My troubleshooting begins here... I reset my PC, the BIOS dots start spinning, and the second the OS starts, the screen turns off. I was able to get into recovery mode. I used a recovery save I had, and it worked; however, no GPU driver is installed. Only the Microsoft driver seems to work. I tried to go back to the previous driver I had used, but it too has failed. At this point I had given up. I am currently using my GTX 1060 3GB; glad I kept it. Does anyone have a similar issue with the RX 6700 XT? My PC specs: Windows 11 23H2, AMD Ryzen 5 3600, AMD RX 6700 XT (GTX 1060 3GB as backup), Gigabyte AX370 Gaming 3 (using the latest BIOS), 650W PSU, 4TB SATA SSD
- 2 replies
-
- rx 6700 xt
- gpu
-
(and 1 more)
Tagged with:
-
I have a Gigabyte Vision 3080 Ti that I purchased about 6 months ago from a guy who bought it during the 2020 release; he gamed on it until he upgraded to a 4090. I have noticed that under heavy gaming load it gets to around 81-83°C, and after a few hours it even spikes to 85°C. Is it time to redo the thermal paste and pads on this card, or are these around normal temps? PS: I have a Corsair 5000D Airflow case, a front-mounted EK Nucleus 360mm AIO with 3 additional Lian Li Uni fans sandwiched, and 4 Lian Li Uni fans exhausting.
-
Hello everyone! I've had an issue with black squares randomly appearing on my screen at times for a while now. I don't remember when it started, but I've had the issue for at least a few months. Worth mentioning: I had a power outage in December that caused one of the SSDs in my desktop PC to break (it shows as RAW data), and possibly this outage could have damaged other components as well without me knowing. The damaged SSD hasn't been connected to my PC since it got damaged, because I haven't had time to look into solutions for it. However, this post is about the black squares showing up on my screen randomly. I have all the latest Nvidia, Windows 10, motherboard BIOS, and CPU chipset drivers installed. I've added an image to demonstrate an example of how the black squares show up. When I move my mouse, the squares disappear. I added the squares in Paint, since I can't screenshot them, but they show as solid black ones like in the image provided. Setup: CPU: Ryzen 5 5600 (non-X version), Motherboard: Gigabyte B550 Aorus Elite V2, RAM: Corsair Vengeance LPX Black 16GB (2x8GB) / 3200MHz / DDR4 / CL16 / CMK16GX4M2B3200C16, GPU: Gigabyte GTX 1660 Ti (GV-N166TWF2OC-6GD), Storage: Kingston A2000 M.2 250GB, Kingston NV2 1TB, PSU: Seasonic Focus+ / 550W / 80+ Gold, Operating System: Windows 10
-
Like usual, I got bored and crawled the internet for new or goofy tech stuff, and AliExpress never disappoints me in this regard. An RTX A2000 with 8GB of VRAM? What in the hell? Very weird, considering the RTX A2000 officially only supports either a 6GB or 12GB config; an 8GB configuration is pretty strange, especially considering how they must have modified the VBIOS and, most importantly, the memory bandwidth allocation, unlike the usual mods that double the capacity by using higher-density memory chips, which some people have managed to do on the likes of the RTX 2080 Ti, 3060, and 3070/3070 Ti. What do you think about this latest "re-salvaged component, Chinese style" abomination? Adding to my interest is the 6-pin GPU power connector (see the last picture below). I believe the sellers either did a "factory shunt mod" (the RTX A2000 can be easily shunt-modded, which is widely practiced among crypto miners) or it's just some reassurance to keep the board power stable (and maybe they tweaked the VBIOS too, somehow). The pictures below are taken from the AliExpress product page - https://www.aliexpress.com/item/1005005952320950.html
- 7 replies
-
- aliexpress
- rtx
-
(and 2 more)
Tagged with:
-
Hello everyone, so I recently got a SOYO RX 5700 XT from AliExpress. I knew beforehand it was a gamble and kind of expected the card to underperform. What I did not expect was a detached substrate stiffener, which I found this morning when trying to replace the thermal paste. My question is: can it be fixed / reattached? And how? The GPU works, btw; it runs a bit hot, but it works.
-
Let's take a look at one method we can use to measure the efficiency of a graphics card (GPU) at various Power Limits. With electricity costs soaring globally and the need to reduce heat emissions, running Folding@Home can be a delicate balancing act between contributing to a worthwhile cause and keeping your electricity bill low and the temperature in your home at a reasonable level. Modern GPUs, like CPUs, have a power-efficiency curve that is exponential: at the upper end of the curve you get diminishing increases in yields. So our goal is to find the most efficient Power Level to run a GPU at. We can define Efficiency as the Yield (PPD) at a specific Power Level (W). For convenience we will use kPPD/W as the unit of Efficiency.

What you will need:
- Folding@Home Advanced Control
- Harlam's Folding Monitor (HfM.net) (Windows only, or using Wine on Linux)
- nvidia-smi (bundled with Nvidia drivers on Windows and Linux)
- Excel or Google Sheets
- An hour or two per GPU

The best way of measuring efficiency in Folding@Home, given the variable yields of differing Work Units (WUs), is to run a GPU at a target Power Level over a period of several days, recording the aggregate Yield of the GPU and dividing it by the Power Level to obtain the Efficiency at that Power Level, then adjusting the Power Level and repeating the measurements. However, a quick indication of a GPU's efficiency can be obtained by observing the changes in Yield (PPD) during a single WU as the Power Limit is adjusted. Frame Time (TPF) is the time required to complete 1/100th of a WU. In this example we will look at an EVGA RTX 2070 Super XC Hybrid (08G-P4-3178-KR) running project 18202 as the WU. First we need to configure HfM.net to calculate its estimate of Yield (PPD) using the last 3 Frames as the sampling window. A larger sampling window might provide more accuracy but will take more time to measure.
Select Preferences in the Edit menu in HfM, choose "Last 3 Frames" for "Calculate PPD based on", and click OK. Note that TPF appears to be calculated across all Frames, so PPD will be the better measurement. Select a GPU to profile, taking note of which Slot on which Host it is running. First we need to determine the minimum and maximum Power Limits supported by the GPU. Open a Command Prompt (Windows) or a Terminal window (Linux) and enter nvidia-smi -q to query the capabilities of the GPUs installed in the system:

Power Readings
    Power Management     : Supported
    Power Draw           : 126.81 W
    Power Limit          : 125.00 W
    Default Power Limit  : 215.00 W
    Enforced Power Limit : 125.00 W
    Min Power Limit      : 125.00 W
    Max Power Limit      : 240.00 W

where:
- Power Limit: the value the Power Limit is currently set to
- Power Draw: the power currently consumed by the GPU
- Default Power Limit: the factory-default Power Limit
- Min Power Limit / Max Power Limit: the lowest and highest Power Limits the GPU supports

Here we see this GPU has a minimum Power Limit of 125W and a maximum of 240W, so we will want to measure the Yields between these two limits. We will use 25W as the step size and record Yields at 125, 150, 175, 200, 225 and 240 watts. Next, open the Folding@Home Advanced Control application from the task bar. Select the system with the GPU under test, click on the "Log" tab to view the log, check the "Filter" option, and select the appropriate "Slot" from the drop-down list. Here we can see that this WU checkpoints every two Frames. We want a consistent sampling window with the same number of checkpoints, as the checkpoint process adds a slight delay, reducing the Yield. In this case we choose to record the Yield after an odd percentage has completed, every 6th percentage, as we want a sampling interval (6 Frames) wider than that used for the Yield estimate (3 Frames) but with a consistent number of checkpoints (3). It is important that we measure the actual Power Draw rather than the set Power Limit, as at the lower and upper bounds the GPU may have trouble enforcing the Power Limit.
Wait until the WU is 5-10% complete before starting measurements. In our Command Prompt (Windows) or Terminal (Linux) enter:

nvidia-smi -i 0 -l 1 --format=csv,noheader --query-gpu=temperature.gpu,power.draw,clocks.gr,fan.speed

which will query GPU 0 (-i 0) on this system and display the GPU temperature, Power Draw, graphics clock speed and fan speed once a second. While the sampling window for the currently set Power Limit is in progress, we will use this to estimate the Power Draw during the sampling window. In the above example, with a 125W Power Limit, we see that the GPU appears to be averaging around the set value of 125W. Next we create a spreadsheet to record our values. The first column is our "set" Power Limit; the second, our observed Power Draw; the third, the percentage measurement point; the fourth, the TPF in seconds from HfM; the fifth, the Yield from HfM; and the sixth, the calculated Efficiency (E/B/1000) in kPPD/W. In a second Administrator Command Prompt (Windows) or Terminal (Linux), set the GPU to the lowest Power Limit at the end of a Frame:

nvidia-smi -i <GPU#> -pl <Min. Power>

In this instance I used:

nvidia-smi -i 0 -pl 125

Watch the nvidia-smi window during the sampling interval and record the estimate of the Power Draw. Populate the Command Prompt or Terminal with the next set-point in preparation for when the current sampling window ends. As soon as the current sampling period finishes (watch the Log in Advanced Control), change to the next set-point (nvidia-smi -i <X> -pl <Y>) and record the TPF and PPD estimates from HfM for the previous sampling window. It helps to record the TPF and PPD values a couple of times later in the sampling interval, as they should be fairly stable after 3-5 Frames have completed, and it will give you a good estimate of the final values.
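If you would rather average the Power Draw samples than eyeball the scrolling nvidia-smi output, a small script can do the arithmetic. This is a minimal sketch, not part of any tool mentioned here: the field order matches the --query-gpu flags above, and the sample lines are hypothetical, not real measurements.

```python
def average_power_draw(csv_lines):
    """Average the power.draw field (2nd column) from nvidia-smi
    --format=csv,noheader output lines such as '65, 126.81 W, 1905 MHz, 55 %'."""
    watts = []
    for line in csv_lines:
        fields = [f.strip() for f in line.split(",")]
        # fields[1] looks like '126.81 W'; strip the unit before converting
        watts.append(float(fields[1].rstrip(" W")))
    return sum(watts) / len(watts)

# Hypothetical samples captured during one sampling window:
samples = [
    "65, 124.90 W, 1905 MHz, 55 %",
    "66, 125.30 W, 1890 MHz, 55 %",
    "66, 124.80 W, 1905 MHz, 56 %",
]
print(round(average_power_draw(samples), 2))  # → 125.0
```

You could pipe the nvidia-smi output to a file during the sampling window and feed its lines to this function instead of transcribing the readings by hand.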
As HfM calculates the Yield (PPD) based on the last 3 Frames and our sampling window is 6 Frames, you do not have to be super accurate about how soon after Frame completion you change to the next set-point. Here are the final values. The values seemed inconsistent after the 175W set-point (completed 15:02), so I took measurements adjusting the Power Limit down from the maximum for comparison. Perhaps the calculations performed on the WU around this point got more complicated? Here is the smoothed (5-minute average values for PPD and Power) efficiency for this GPU over the initial test run from my Zabbix server for comparison. I then calculated the average Efficiency over the two measurements for each of the set-points. We can then create a scatter graph of the data, including a trend line, and display the confidence or "fit" of the trend line (R² value). For this WU on this GPU, we see the Efficiency is highest at the lowest Power Limit and gets exponentially worse as the Power Limit is increased. To put it another way, dropping from 225W, which is close to the 215W default, down to the minimum 125W limit, we see a 7.53% decrease in PPD for a 44.4% decrease in Power.
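The per-set-point arithmetic above (Efficiency = Yield / Power Draw, reported in kPPD/W) can be sketched as follows. The (set limit, observed draw, estimated PPD) records here are made-up placeholders to show the calculation, not the measurements from this test run:

```python
# Efficiency = Yield (PPD) / Power Draw (W), expressed in kPPD/W.
def efficiency_kppd_per_watt(ppd, watts):
    return ppd / watts / 1000

# Hypothetical (set Power Limit, observed Power Draw, estimated PPD) records:
runs = [
    (125, 124.8, 3_100_000),
    (150, 149.5, 3_200_000),
    (175, 174.9, 3_280_000),
    (200, 199.6, 3_320_000),
]

# Find the set-point with the best kPPD/W:
best = max(runs, key=lambda r: efficiency_kppd_per_watt(r[2], r[1]))
for set_pl, draw, ppd in runs:
    print(f"{set_pl} W: {efficiency_kppd_per_watt(ppd, draw):.2f} kPPD/W")
print("Most efficient set-point:", best[0], "W")
```

With these placeholder numbers the lowest Power Limit wins, mirroring the diminishing-returns curve described above; with your own spreadsheet values the comparison works the same way.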
- 16 replies
-
- folding
- performance
-
(and 4 more)
Tagged with:
-
Just made an account to ask this question. I recently upgraded my graphics card to a 4090, and it's been working like a dream. I even got a 12-pin GPU cable from CableMod. I heard about the recall on one of their other products, but since I have the cable itself (it replaces my PSU cables and cost me a fortune), I figured I was okay. When I first attempted to install the 4090, I realized it was a CHONK. I tried to give my cable enough breathing room to straighten out near the plug. The only part that bends is the cable itself; the plug doesn't seem to react at all when I put the glass panel on. You can see that it does curve down a bit, and the images are a little misleading since they're taken diagonally, which makes it seem further out than it actually is. Image (8) shows the one spot it actually touches. Image (7) shows that all the pins are unmoved and still plugged in, and there is no gap in between; I made sure it was all connected. I spoke with some people, and they said that my cable was pretty bent, and they recommended I get a vertical mount. I use a Lian Li O11 Dynamic XL, and they gave me this link to reference: UNIVERSAL 4- SLOTS VERTICAL GPU KIT (WITH GEN 4 RISER) – LIAN LI is a Leading Provider of PC Cases | Computer Cases (lian-li.com). Personally, I don't know how reliable vertical mounts can be, or how reliable the Lian Li mount is. I love their fans, they're very pretty, but I never put much thought into a vertical mount. I have not seen any issues with my 4090 while it's been like this; however, I would rather get some more input before making a decision that could ultimately affect my entire computer. Like I said, I tried to give my GPU cables as much room as possible, and although the cable is bent a bit, the plug is secure and there is no gap that I can see (I have perfect vision, btw). What should I do? Should I do anything at all?
-
I was reassembling some parts into a new case and came across a different-looking 6-pin GPU power input. According to its manual, this GTX 1060 should have the following pin pattern, from left to right and top to bottom: top row = D, D, square; bottom row = square, D, D. However, the actual hardware I have here is as follows: anomalous pattern: top row = square, square, D; bottom row = D, D, square. My power supply follows the correct pattern, the first one, the one the user manual says it should be. The power supply's 6-pin PCIe connector connects fine to the GPU, despite the different pattern. The thing is, I'm not sure if it's just a manufacturing variation or if the problem extends to the circuitry level. I don't know how this GPU was mounted in the old case, because when I took the job the parts were already disassembled. The previous IT guy moved to another company and I have no way to contact him. I don't know how to troubleshoot this, and I'm afraid of burning the GPU. I welcome any advice. Thanks in advance for your attention.
-
Hi, my games have been crashing a lot lately while giving graphics errors. I have tried numerous things to fix the issue. My latest attempt at fixing the problem was a complete reinstall of Windows on a different drive, which did not help. This makes me think the problem is not software related. Could my GPU be dying? It has no artifacting problems, just the crashes. My GPU is a Gigabyte RTX 2060 OC edition, and I have had it for about 4 years. It currently reaches 77°C at most while at 99% usage, but it used to sit at 83°C at 99% usage all the time before I fixed that issue (for about 2 years). I had this problem in Valorant and Minecraft, but it has not occurred since I set my max FPS to 160, and the GPU usage stays below 60%. I do still have the problem in Fortnite, where the GPU usage is over 95%. The crash does not happen instantly; it always happens about 15 minutes after starting the game, and then about every 3 minutes after the first crash when relaunching the game. The rest of my specs: CPU: Intel Core i5-13600KF, Motherboard: MSI MPG B760I Edge WiFi, RAM: 2x 8GB Trident Z Royal DDR4 3200MHz, SSD: WD Black SN850X 2TB, PSU: be quiet! 500W. Let me know if you have any idea what could cause the issue and how to resolve it. Thanks!
-
I've been looking for a used workstation laptop with MXM compatibility. I can't find a real list of older models with this capability, only new ones. Any suggestions or examples?
- 1 reply
-
- mxm graphics
- laptop
-
(and 2 more)
Tagged with:
-
So I am planning to upgrade from a 1650 Super to a 7800 XT. I have a Ryzen 9 3900X and a 700-watt 80+ Bronze PSU (Gigabyte B700H). Am I good to go with this PSU?