xFluing


Posts posted by xFluing

  1. Just now, Cereal5 said:

    The issue I see with this is that, just like a CPU, a GPU has many cores in each die. It's not just one big core; there are hundreds or thousands of cores. It would be like using a server mobo with multiple CPU sockets. I'm not totally sure how those boards work, but a GPU would have to do all that in about 1/10th of the PCB area. I think it would be easier to just print a bigger die and charge more. However, if the Titan V is anything to go by, it seems the technology is around to stuff way, way more cores into a die. No matter what happens, the price will increase, whether or not the technology does. Inflation's a bitch, lol

    Inflation has nothing to do with the costs I'm talking about. Wafers cost a lot, so fitting as many dies onto one as possible greatly increases the number of good dies per wafer and, in turn, reduces the cost.

     

    The RTX 2080's price doesn't follow any inflation curve either, so it's still not inflation.
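    The die-size/cost argument can be sketched with some back-of-the-envelope math. Everything below is illustrative: the 300mm wafer, the defect density, and the die areas are made-up assumptions, and the dies-per-wafer formula is a common rough approximation, not any foundry's actual model.

```python
import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    """Rough upper bound on candidate dies per wafer.

    Common approximation: wafer area over die area, minus a term
    for partial dies lost at the wafer's round edge.
    """
    r = wafer_diameter_mm / 2
    wafer_area = math.pi * r * r
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2, defects_per_mm2=0.001):
    """Fraction of dies with zero defects (simple Poisson yield model).

    A bigger die is a bigger target for random defects, so yield
    falls exponentially with area.
    """
    return math.exp(-die_area_mm2 * defects_per_mm2)

# Small, mid-size, and huge dies (mm^2) -- illustrative values only.
for area in (100, 400, 800):
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{area} mm^2: ~{dies_per_wafer(area)} candidates, "
          f"yield {poisson_yield(area):.0%}, ~{good:.0f} good dies")
```

    With these made-up numbers, an 8x larger die gives roughly 20x fewer good dies per wafer, because you lose twice: fewer candidates fit, and a larger fraction of them are defective.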

  2. 2 minutes ago, GoldenLag said:

    AMD is working on something like that for datacenters. For regular gaming and similar workloads that involve drawing frames, it's very hard to achieve. So far AMD has stated that a consumer MCM design based on current designs is impossible, and making a design that can do it is something they haven't managed so far.

     

    They can't apply Infinity Fabric tech to GPUs, in other words.

     

    Something they could possibly do is a modular GPU core and build accordingly.

     

    Interestingly, Intel is looking at modular die tech atm. They're currently looking at it on the CPU side, though it's probably applicable to GPUs as well.

    Well, Intel got the idea from AMD's CCXes, but it's different enough: they're looking to make the cores on 7nm and the less important stuff, like the memory controller, on 14nm or 28nm. I have to say that's pretty smart, because those two processes are already mature enough for high yields and low prices, and because the cores will be made individually it allows for much smaller dies, once again reducing the cost.

  3. Just now, Ross Siggers said:

    AMD already did this, admittedly a long time ago. The HD 4870 X2 had two chips on a single PCB; I can't remember if they've done it since, though.

    The 295X2 also had two GPUs, but they were running in CrossFire. I'm talking about something that would have, say, four GPU modules, each acting like a CPU core, working together as one big GPU unit made of many "cores" and removing the issues of CrossFire/SLI.

  4. 2 minutes ago, bleedblue said:

    Multiple GPUs to work together? You mean SLi/Crossfire? That was a thing way back when, Asus Ares and Mars cards IIRC.

    Nah, I'm talking about some kind of tech that's not shit: something that would make the GPU modules work as one single GPU unit instead of taking turns rendering frames or each one rendering half a frame.

    So, now that we're getting closer and closer to the end of manufacturing process shrinks, how will GPUs evolve from here on out? The only solution with current technology would be bigger dies, sure, but the bigger the die, the fewer of them fit on a wafer, which drives costs higher and higher.

     

    Unless we somehow find a way for multiple GPUs to work together like Ryzen CCXes do, I'd say GPU speed will just hit a plateau and stay there until GPUs become powered by quantum processors or some shit.

    No matter what I use to record video (OBS, ShadowPlay, ReLive), it always seems to pick up the equalized audio as well. Is there a way to capture the raw audio instead?

    I have this Athlon X4 860K system on an MSI A68HM-E33 (V1) motherboard. For the longest time, I ran a dual-channel 4GB setup with some cheap 2GB modules (at 1.5V, pretty standard). Fast forward to a few months ago, when a friend gave me two 4GB modules (8GB total), a Corsair XMS3 kit rated at 1.6V. After putting it in, I started having issues booting, even with XMP on (which made the RAM run at its rated 1.6V). I then realized I couldn't run dual channel anymore, not even with the old 4GB kit.

     

    The Corsair kit would just refuse to POST (and on the rare occasion it did POST, it only showed 4GB, even though the board explorer still saw both slots populated), while the old 4GB kit would make the monitor show artifacts that ran for about three rows and then cut off in the middle of the next one. So now I'm stuck with a single 4GB stick in single channel.

     

    Is there a way I can fix this? Did I burn out my second DIMM slot with this RAM?

    This is an old Quadro FX 3450 (equivalent to a 6800 GT, I think; it's the same form factor, so I assume it is). I used to game on it until 2015, so yeah, you can tell it was a nightmare, with games like Path of Exile being among my favourites back then. Even Half-Life 2 had issues on it.

     

    Sadly, I couldn't use it in a retro system even if I wanted to, because it suddenly started artifacting on its own after two years in storage.

     

    Full sized pics in spoiler below.

     

    Spoiler

    IMG_20180806_155939.jpg IMG_20180806_155954.jpg IMG_20180806_155930.jpg

  9. 9 hours ago, Underwrought said:

    When I want accurate and detailed I watch GN; when I want entertainment I watch LTT. It's weird and kinda not good, but this falls under "meh" for me.

    That's the exact problem, though: BECAUSE Linus is more entertaining to watch, of course he's going to have a much bigger audience than GN, so he should ESPECIALLY pay attention to stuff like this to avoid confusion.

  10. Just now, porina said:

    Look at the bar lengths. Each vertical marker seems to be 2. That puts the 1300X at 12.x and 16.x, and the 8700k at 13.x and 17.x. Both rounded to 13 and 17. So complain about them boosting the Ryzen numbers.

    The takeaway point was that there was no practical difference. 

    Then why make the takeaway point look like a rigged graph? Again, either keep the bars at their true lengths with unrounded numbers (or no numbers at all), or make them the same length if the numbers are rounded. It's not about the point they're making, it's about the implications of such a small error: this is how misinformation gets spread, through small errors everyone copies.
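    The rounding complaint is easy to demonstrate: draw the bars at their true lengths but print rounded labels, and two different scores can end up sharing the same label while their bars visibly differ. The FPS values below are made-up stand-ins for the graph in question, not the video's actual numbers.

```python
# Bars drawn at true length, labels rounded for display.
# Two different scores get the same rounded label, which is
# exactly what makes the chart look rigged at a glance.
scores = {"Ryzen 3 1300X": 12.6, "Core i7-8700K": 13.4}  # made-up values

for name, fps in scores.items():
    bar = "#" * round(fps * 2)  # bar length tracks the true score
    print(f"{name:>14} [{round(fps):>2} fps] {bar}")
```

    Both bars print the label "13 fps", yet one is clearly longer: the data behind the chart is fine, but the rounded labels contradict the bar lengths.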

  11. 1 minute ago, porina said:

    This was covered somewhere already. Reason given was along the line that the lengths are correct, but the numbers are rounded for presentation.

    Thought it might be, but why not just give the unrounded numbers? Otherwise it can cause uproars, like this one already has.

    This still doesn't absolve them of this goof: either round the numbers and make both bars the same length, or DON'T round the numbers and keep the bars at different lengths.

    Recently I've come up with a system to tier Nvidia's cards, to clear up exactly which video card is what.

     

    This list calls the video cards by their real names; you can figure out for yourself which is which, it's pretty easy.

     

    GT 1020

    GT 1030

    GTX 1050

    GTX 1055

    GTX Doorstop

    GTX 1050 Ti

    GTX 1060

    GTX 1070

    GTX Doorstop 2

    GTX we-don't-want-to-give-you-the-full-GP100-chip-even-though-we-should-have-because-this-is-the-80-class-which-should-be-the-full-die

    GTX we-still-don't-want-to-give-you-the-full-GP100-chip

     

    I hope this clears stuff up and makes it a bit easier to differentiate between each of them.

  13. 18 minutes ago, Theguywhobea said:

    Yeah, I notice the framerate dip at Thousand Cuts as well. When looking at the ground, the framerate goes up along with GPU usage, but looking at the bandit camps tanks the FPS as well as the GPU usage; seems to be a definite CPU bottleneck.

    Those CPUs you have in your sig shouldn't bottleneck the game, unless there's something odd about that Xeon.

  14. 51 minutes ago, Tabs said:

    Your memory and CPU are both bottlenecks right now. Assuming Windows' memory compression is on you're only left with 3GB or so for the game, and the compression itself takes some CPU cycles.

     

    What resolution do you play at? Any effects that tax your CPU or increase your memory load will have increasingly large effects on your framerates, especially for an open world game like Borderlands, and especially on your minimums.

     

    Any of the Nvidia-specific stuff should be disabled, since even on Nvidia GPUs most of it is CPU-based.

    PhysX is disabled, so yes, I have that off right now.

     

    Yeah, I thought my CPU might be the bottleneck. Also, I play at 1440p since I saw no difference between it and 1080p, so I figured I might as well run at the higher resolution.

  15. 1 minute ago, TVwazhere said:

    Are you running out of RAM? (actual ram not Vram)

    I don't know, it may very well be. Maybe I can set RivaTuner to tell me if it starts writing to the page file. 4GB of RAM is starting to be limiting, but I'll be changing the whole platform soon, so buying RAM right now would just be a waste.

  16. Specs are in sig

     

    Yes, yes "welcome to the BL2 runs poorly club" so here's the deal:

     

    It runs very well most of the time. I'm only having issues in large areas like Thousand Cuts and Tundra Express (though The Highlands, also large, runs smoothly), and, oddly enough, in Sanctuary, even though the city is small. So I have another theory: I'm having trouble in large areas with lots of stuff happening.

     

    Before, I had PhysX on, and in Thousand Cuts it would drop into the low 20s. I've since disabled it and performance has improved; it drops to 45 now, but that's still not perfect.


    I've also disabled the cel shading, which boosted performance by a lot, but I'm still getting these drops, which makes me suspect a CPU bottleneck.

     

    I tried fiddling around in the config files, tweaking every setting I could think of, but still didn't have much success.

     

    So I've come to the conclusion that this may very well be a CPU bottleneck, even though my CPU is way beyond the recommended specs.

  17. 14 hours ago, Crunchy Dragon said:

    Different cards have different VRAM. To be perfectly clear, the 1060 6GB is more of a 1060 Ti than a 1060.

     

    If Nvidia had kept their naming scheme from 500-700 series, we would have the

    GT 1030, GTX 1050, GTX 1050Ti, GTX 1060, GTX 1060Ti, GTX 1070, and so forth.

    That's wrong: the 1060 3GB is a cut-down version of the 6GB, so the 6GB version is the real 1060 and the 3GB should be a "1055 Ti" or something (1055 as in something between the 1050 Ti and the 1060 6GB).

    With the recent Intel trends, I'd really like to see direct-die cooling make a comeback. There's just something about seeing a bare CPU; granted, you won't see it once the cooler is on anyway, it's just an aesthetic thing I like.

     

    Now, there's the risk of cracking the die with uneven mounting pressure, like GN pointed out, but I think that could be fixed with pads to take some of the pressure off the die, like you'd see on old Athlon CPUs.

  19. 2 hours ago, TheEvanFactor said:

    I am using a ryzen 1600 and 8GB of ram. I use the computer for games like Fortnite, Overwatch and Fallout. Currently I play on high settings but I would be fine with medium.

    Dude, with a card like that and games like those, you can run them at 1080p no problem; even an RX 550 can handle 1080p in those games (except maybe Fallout).

    No offence, but I personally think naming your computer is going a bit overboard, to the point of being, dare I say, "cringy". Then again, that's probably because most of the names I've seen were edgy 12-year-old names like "PROJECT: X" and stuff like that. There's nothing wrong with just calling it "my PC" or "my rig".
