Showing results for tags 'gddr6x'.
-
I have a PNY 4070 Ti OC. I've seen this done on YouTube to either upgrade or replace the RAM on the PCB, so I know it's possible. My problem is that my board is set up to take 16GB as 2GB chips. The last two pads are unpopulated, as are all the little resistors and capacitors around where each chip goes. I'm guessing Micron doesn't make a 4GB chip, because I thought the 40 series used the same boards across the series with different components on them and I could swap them all for 4GB chips to give me 24GB. It appears I'm limited to 16. I've checked out what the 4090 board looks like and it has 12 spaces for 12 2GB chips. I think the 70 uses the same board as an 80, given that the layout could take 16GB of RAM. I can find 2GB Micron chips. My problems thus far are: 1) How do I find out what the smaller components are? And 2) how do I know what BIOS to flash on the board to make sure all 16GB will work? Even though right now I can max out the settings at 4K and get a little over 100fps, as newer games come out this won't last very long, hence why I want to add some RAM. I was stupid when I built this computer, as I had the money to buy a 90; the price was already over 3 grand and I tried to save a little money. I didn't think it through, and I kick myself every day for it as I no longer have the $1600 to fully upgrade.

In a slightly unrelated issue, I was trying to update the firmware that has something to do with the RGB, as it really sucks compared to the rest of my RGB. Upon running PNY's program to do so, it reads my firmware version as 40.90.02 and then closes. 3) So, is the GPU firmware the same as the GPU BIOS? If it is and I can pull this off, I may not have to change the firmware/BIOS. Lastly, 4) Does anyone know what the small red 4-pin connector is on my board? I can't find any mention of it anywhere in the documentation or on the internet, and it's driving me nuts.
-
I just watched the video Anthony was in talking about GDDR7 RAM, and it got me thinking. If it's so much better, would it be smarter to wait on a new PC entirely, or to buy a lower-end GPU with a new PC now and wait for the next iteration of GPUs (like the 50 series, or Super 40 series if Nvidia goes that way) to upgrade? On the other hand, it will more than likely be cheaper to buy a system now than to wait for manufacturers to make the next gen and unreasonably jack the prices up again. Of the current GPUs on the market, the 4070 really caught my eye for cost-to-performance while fitting nicely into a budget. At this point the plan was to buy a new PC that would be reasonable cost-wise but still powerful enough to last me at least 5 years *cough* (if all the parts I want can be at reasonable prices and in stock at the same time) *cough*. In all seriousness, I just found out I have Covid, so the coughs are kind of real, which is funny in and of itself to me.
-
Original source - https://www.tomshardware.com/news/hwinfo64-adds-gddr6x-temp-monitoring-rtx30series

Summary

With a very recent update to HWiNFO64, users are now able to see, directly via the popular hardware monitoring tool, the temperature of the GDDR6X memory modules on any RTX 3080/3090 card they own, and boy will they (likely) see a surprise waiting for them. Even with DLSS enabled, maximum temps of the GDDR6X memory modules can reach as high as 100C while gaming. It's even worse if someone is using their RTX 3080/3090 for memory-intensive workloads like cryptocurrency mining; the VRAM in those cases can shoot up to 110C and severely down-clock itself to keep from being literally roasted to death, which makes sense, as 110C is already (sort of) higher than the "95+C" operating temperature listed on Micron's official website for GDDR6X. Igor's Lab (the first, afaik, to bring up this "hot VRAM" issue back in September of 2020) estimates that the Micron chips themselves would likely have to hit 120C before sustaining immediate permanent damage, but (also afaik) the general consensus among anyone who knows at least a little bit about computer hardware is that triple-digit positive (or negative, too, in many cases) Celsius temperatures are typically never good for the lifespan of the hardware itself.

Quotes

Tom's Hardware on GDDR6X temperatures while gaming -

Same article detailing how this issue also affects various AIB-partner cards -

And finally, a quote from Igor's Lab regarding the absolute max temps before immediate permanent damage:

My thoughts

1. Since I felt that this additional info wasn't necessarily part of the article summary, I'll include a link to a video posted within the past 24 hours by Classical Technology about this exact issue, which demonstrates that this hot-VRAM issue is essentially replicable for any VRAM-intensive workload; that video also nicely includes some tips on how to solve it (and the pinned comment about whether or not it's a "GDDR6X-only" issue may or may not have been me lol):

2. I daily drive an HP ZBook 15 (1st Gen) with Ubuntu installed, and before I re-pasted the i7-4800MQ the (bloody) thing would thermal throttle at just 40% load, so yea, I can definitely feel the pain of whoever's facing these thermal issues. Yes, it was definitely (at least mainly) a paste issue, as thermal pumping had basically pushed most of the paste to the sides of the CPU (unfortunately I only learned about thermal pumping after re-pasting the CPU, so at the time I thought it was just crappy thermal paste, which is also why I didn't bother taking a pic), and I am fairly confident the heat-sink was still getting (at least somewhat) enough airflow, as I did take a picture (see attached) of how clogged the heat-sink was before I cleaned it and did the re-pasting. I couldn't even get that blob of dust (on the right) out despite multiple attempts with canned air; heck, I didn't even know it existed until I cracked everything open, so either way a re-pasting was kinda necessary regardless of how much it was actually a dust issue to begin with.

3. Sounds like something that the new and improved chiller should be able to handle with a bit of custom engineering. Video idea, anyone?

4. I honestly think that those of us who weren't able to get our hands on the (at least as of right now) unicorns that are the RTX 3080 and 3090 might've actually lucked out, given that these are essentially hardware issues that cannot be fixed without putting severe power limits on the cards themselves, unless Nvidia comes out with a driver that allows VRAM-specific power-limit tuning, which doesn't seem to be the case as of right now.

4(b?). You know, the fact that this issue was already known at least several months ago, during a September, kinda reminds me of a little pandemic which also had signs that it could have begun as early as September of a different year... Let's just hope I didn't just jinx 2021 for computer hardware too lol.

5. Looks like @FaxedForward was onto something back in September 2020...

6. Oh, I almost forgot... Fermi

Sources

(Original source at top)
Micron's official GDDR6X website - https://www.micron.com/products/ultra-bandwidth-solutions/gddr6x
Igor's Lab article - https://www.igorslab.de/en/gddr6x-am-limit-ueber-100-grad-bei-der-geforce-rtx-3080-fe-im-chip-gemessen-2/
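The three temperature figures discussed above (Micron's "95+C" listed operating temperature, the ~110C point where the cards down-clock, and Igor's Lab's ~120C immediate-damage estimate) can be summed up in a tiny sketch. This is just the post's numbers wrapped in a function of my own; the names and wording are mine, not an official spec:

```python
def gddr6x_temp_status(temp_c: float) -> str:
    """Classify a GDDR6X memory-junction temperature against the figures
    quoted in the article (not an official Micron spec table)."""
    if temp_c >= 120:
        return "danger: Igor's Lab's estimated immediate-damage territory"
    if temp_c >= 110:
        return "throttling: the card down-clocks to protect the VRAM"
    if temp_c >= 95:
        return "hot: above Micron's listed operating temperature"
    return "ok"

print(gddr6x_temp_status(100))  # gaming temps from the article -> "hot: ..."
```

So by the article's own numbers, the 100C gaming temps sit in the "hot but not yet throttling" band, which matches the tone of the summary.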
-
Hi! This has probably been asked and answered a few times, but when I try to google it, all the answers are about mining, which I don't care about. So, a couple of days ago I took delivery of my new RTX 3080 Ti FE. I ran the Time Spy benchmark with HWiNFO open; the core temp got to 90C and the memory junction temperature got to 105C. Even though I knew those cards (and especially the VRAM) run hot, I started tweaking some settings: changed the fan curve and undervolted it to 1850MHz @ 850mV. The core was fine, rarely reaching 75C now, but the VRAM, even when playing Cyberpunk, was reaching slightly above 100C. I underclocked the memory by -502MHz (as low as I can go in MSI Afterburner) and now it usually peaks at 94C when playing Cyberpunk. So now, two questions: 1. Are my current temps OK? (I know Micron lists GDDR6X at 110C, but I don't really think that's safe.) 2. Is there anything else I can do to lower the temps (especially the VRAM) other than repadding? I'm not really comfortable disassembling a $1440 card during times when getting a replacement is impossible.
-
The 3090 Ti has 24x GDDR6X VRAM chips on a 384-bit bus with 21Gbps memory - 1008GB/s of bandwidth. The 3080 Ti has 12x GDDR6X VRAM chips on a 384-bit bus with 19Gbps memory - 912GB/s of bandwidth. Both the 3090 Ti and 3080 Ti have the same 384-bit bus width. Why did they decide to go this route? A very important question: are the current chips incapable of holding more than 1GB of VRAM per chip? Because if that's true, then it's logical to go with 24x chips and split them into a 16-bit bus each, especially because they are capable of transferring double the data rate of GDDR6 (basically, instead of sending 1 bit at a time, it can send 2 bits of information at the same time). So it would play out something like PCIe generations and NVMe: an NVMe drive can use 4 lanes on PCIe 3.0, or migrate to PCIe 4.0 and use only 2 lanes for the same performance, since the data rate doubles. My theory: if GDDR6X memory can only carry 1GB per chip, like GDDR6 did, halving the bus per chip to get higher capacity makes sense. Because if GDDR6X could hold 2GB per chip with a 32-bit bus, wouldn't it be logical for Nvidia to just put down 12 chips instead of 24? But it really makes no sense to me right now, because I am missing some information needed to connect everything and create the full picture.
- Perhaps by carrying 2GB per chip they would need to double up the bus size? Or have another chip that contains 2GB? Or need to create a new type of traces on the PCB?
- Perhaps it might be easier to just put more chips down on the PCB, since the traces might already exist and they only need to add chips, not recreate traces?
- Maybe GDDR6X indeed only contains 1GB per chip, and GDDR6X can carry the same amount on a 16-bit bus width as GDDR6 would on a 32-bit bus width?
- So why did they use a 384-bit bus on the 3080 Ti then? Maybe because they wanted high enough bandwidth, and the 3090 Ti would have faster memory speeds and higher capacity anyway while keeping the same bus width.
- Perhaps it is only a gimmick? (I didn't do much research on that, but does more VRAM actually mean better, let's say, rendering times or whatever, if the bandwidth is almost the same as before? Not including the performance gains of the 3090 Ti vs the 3080 Ti, strictly just bandwidth right now.)
- Another idea: they didn't double up the bus width because the tech doesn't exist in GDDR memory yet (a 64-bit bus per chip), or, if they had the option and didn't use it, would it signal that they could use it on the next gen? But because the 4090 still has the same bus width, can we assume the current GDDR generation just doesn't offer more than 32 bits per chip? Or, because GDDR6X doubles the data rate, can we assume that by using a 16-bit bus per memory chip it is hypothetically a "32-bit" bus - not really, it's still a 16-bit bus, but with 2 bits of information being sent out at the same time instead of 1 on a 32-bit bus?

Like, I'd really love to know for sure what drove the decision to keep the same bus width on the 3090 Ti while adding 2x more chips onto the board with GDDR6X memory, in comparison to the 3080 Ti. Am I right with any of the points I've made?
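For what it's worth, the bandwidth figures in the post can be sanity-checked with one line of arithmetic: peak bandwidth in GB/s is total bus width (bits) times per-pin data rate (Gb/s) divided by 8. A 24-chip layout at 16 bits per chip (afaik the "clamshell" arrangement, as on the original 3090, where two chips share one 32-bit channel) and a 12-chip layout at 32 bits per chip both add up to the same 384-bit bus, so capacity doubles but bandwidth doesn't. A minimal sketch (function name is mine):

```python
def vram_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s = bus width (bits) * per-pin rate (Gb/s) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# Whether the 384-bit bus is populated as 12 chips x 32 bits or
# 24 chips x 16 bits (clamshell), the total width is the same:
assert 12 * 32 == 24 * 16 == 384

print(vram_bandwidth_gb_s(384, 21))  # 21 Gb/s GDDR6X -> 1008.0 GB/s
print(vram_bandwidth_gb_s(384, 19))  # 19 Gb/s GDDR6X -> 912.0 GB/s
```

On the capacity question: Micron's first GDDR6X modules were 8Gb (1GB); the 16Gb (2GB) modules only arrived later, which is consistent with the "halve the per-chip bus, double the chip count" route for reaching 24GB.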
-
So I plan to replace the thermal pads and paste on my card, and I have a couple of questions. To preface: my card runs at around 70C max when running Superposition, and mem temps reach 94C. When I remove the side panel on my 570X, the core drops to 66C and the mem drops to 88C. This is with around 2100rpm on the fans with the panel off and 2200rpm with the panel on. I have 6 LL120s in my case at 1200rpm. So I figured I'd replace the paste and pads before swapping out my case and see what happens. I saw multiple reviews where, on an open bench, the card doesn't really go past 63C, and since it's a used card I'm guessing the pads and paste have degraded. So I have a couple of questions regarding the whole process. I am debating between Kryonaut and SYY 157 for the core, not sure which to go with. What brand of pads should I get? What thermal conductivity? Should I buy multiple packs of 1mm, 2mm, and 3mm and cut them myself? What do I need to cut them with to be precise? Should I go with a predetermined thermal pad kit for my specific card? Is it worth adding an extra thermal pad on the right side of the die, on the VRAM modules? I read that there should be one there from the factory, but there isn't; I just don't want it to prevent proper contact with the GPU die as I game, since ideally both VRAM and core temps would come down. Does this void the warranty? I am finding mixed answers online, and it doesn't explicitly say in the warranty terms.
-
I have a few questions about gaming with GDDR6 on the RTX 3070 vs GDDR6X on the 3070 Ti - does the memory type even matter if both have 8GB, just of a different type and speed? Gigabyte RTX 3070 Gaming OC rev 2.0 vs EVGA GeForce RTX 3070 Ti FTW3 ULTRA. If there is roughly a $100 price difference, is the Ti worth it, or is the Gigabyte 3070 enough for a Ryzen 5 5600?
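One way to put a number on the memory difference between the two cards: both use a 256-bit bus per Nvidia's published specs, with the 3070 on 14Gbps GDDR6 and the 3070 Ti on 19Gbps GDDR6X. A rough sketch of the peak-bandwidth gap (function name is mine; whether that gap is worth ~$100 in actual games is a separate question):

```python
def bandwidth_gb_s(bus_bits: int, rate_gbps: float) -> float:
    """Peak bandwidth = bus width (bits) * per-pin data rate (Gb/s) / 8."""
    return bus_bits * rate_gbps / 8

rtx_3070 = bandwidth_gb_s(256, 14)     # GDDR6  -> 448.0 GB/s
rtx_3070_ti = bandwidth_gb_s(256, 19)  # GDDR6X -> 608.0 GB/s
print(rtx_3070, rtx_3070_ti)
```

So the Ti has roughly a third more peak memory bandwidth at the same 8GB capacity; at 1080p/1440p with a Ryzen 5 5600, game benchmarks rather than raw bandwidth are what would settle the value question.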
-
Summary

If you frequently leave Discord open on your desktop and have recently noticed degraded game performance from your Nvidia GPU, a Discord glitch might be the culprit. Nvidia will release a fix soon, but also offers directions for manually rectifying the problem.

Quotes

My thoughts

This is unfortunate, but at least not a big deal. I've tested this with an RTX 4000 series card but don't seem to be affected. My suspicion is that it's because I use MSI Afterburner with manual overclocks/undervolts. I did try 'resetting' values to default in MSI Afterburner, but the memory frequency still did not lower when opening Discord (joining voice channels, streaming my monitor, etc.). For those having the issue, most suggest using GeForce 3D Profile Manager and changing the Discord profile. I would be interested to see if just installing MSI Afterburner also fixes the problem, since I'm not affected.

Sources

https://www.tomshardware.com/news/discord-throttles-nvidia-gpu-memory-clock-speeds#xenforo-comments-3794740
https://www.techspot.com/news/97449-discord-might-throttling-nvidia-gpu-memory.html
-
Summary

GDDR6X is real and runs at 19-21Gb/s per pin, developed by Micron. This is also the first official mention of the 3090 naming scheme.

Micron: In the summer of 2020, Micron announced the next evolution of Ultra-Bandwidth Solutions in GDDR6X. Working closely with NVIDIA on their Ampere generation of graphics cards, Micron's 8Gb GDDR6X will deliver up to 21Gb/s (data rate per pin) in 2020. At 21Gb/s, a graphics card with 12 pieces of GDDR6X will be able to break the 1TB/s system bandwidth barrier! Micron's roadmap also highlights the potential for a 16Gb GDDR6X in 2021 with the ability to reach up to 24Gb/s. GDDR6X is powered by a revolutionary new PAM4 modulation technology for Ultra-Bandwidth Solutions. PAM4 has the potential to drive even more improvements in data rate.

I'm surprised we're getting GDDR6X this soon; it feels like we just got GDDR6 yesterday.

Sources

https://videocardz.com/newz/micron-confirms-nvidia-geforce-rtx-3090-gets-21gbps-gddr6x-memory
Official document PDFs from Micron (taken down):
https://media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/ultra_bandwidth_solutions_tech_brief.pdf?rev=16ecd1bb494f4a958810146858c02bab
https://media-www.micron.com/-/media/client/global/documents/products/technical-marketing-brief/gddr6x_pam4_2x_speed_tech_brief
Micron document mirror (thanks to NuLuumo on reddit):
https://drive.google.com/file/d/17cSkg9RzLne74FGN5jvcLGopS79p4b_m/view
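Since Micron credits the data-rate jump to PAM4 modulation, here is a toy sketch of the core idea: ordinary NRZ signaling sends 1 bit per symbol using 2 voltage levels, while PAM4 sends 2 bits per symbol using 4 levels, so the same symbol rate carries twice the data. The function names and the Gray-coded level mapping below are mine for illustration, not Micron's electrical spec:

```python
# Toy model of NRZ vs PAM4 signaling. PAM4 uses 4 amplitude levels to
# carry 2 bits per symbol; the specific level values here are illustrative.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}  # Gray-coded

def nrz_encode(bits):
    """NRZ: 1 bit per symbol, two levels."""
    return [+1 if b else -1 for b in bits]

def pam4_encode(bits):
    """PAM4: 2 bits per symbol, four levels (bit list must have even length)."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 0, 1, 1, 0, 0, 0, 1]
print(len(nrz_encode(bits)))   # 8 symbols for 8 bits
print(len(pam4_encode(bits)))  # 4 symbols for the same 8 bits
```

That halving of symbols per bit is how GDDR6X reaches GDDR7-era per-pin rates without doubling the signaling clock; the trade-off (not modeled here) is that four closer-spaced levels leave less noise margin per symbol.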