SaperPL

1. I know this isn't exactly the right place for this suggestion, but I don't see a dedicated feedback thread started and maintained by LMG staff, so I'm hanging it on the "providing value to the community" thing stated in the opening post. Maybe there should be a separate thread dedicated to process feedback.

Seeing some GPU test articles pop up on the Labs page, I'd like to suggest working out a unified system of physical card measurements with reference drawings, plus a corresponding way of checking how much space a case's GPU compartment actually offers. This matters because the current situation is such a mess that we often don't know how a card was measured on the vendor's product page if there's no drawing showing it. Even within one vendor's site, cards are measured differently: some are measured from the beginning of the PCI bracket (its fingers) to the top of the card, and some from the beginning of the PCIe connector PCB to the top of the card. The same question applies to length: is the bracket counted in or not?

Thickness gets tricky as well. You can measure overall thickness and call it a day, but that only works if every card is offset from the back of the PCB by the same amount, i.e. uses the full 5 mm of allowed clearance at the back. If a card's cooler retention mechanism doesn't need those 5 mm for its spring-tension screws and the card sticks out only 2 or 3 mm at the back, then such a 40 mm "2-slot" card may actually not fit next to another expansion card that uses its full 5 mm, or in a mini-ITX/SFF case where card thickness is limited, because the missing millimetres at the back move to the front of the card. (A rough sketch of the kind of per-axis record and fit check I have in mind follows this post.)

Another issue is how much space the power cables need. Some cards have their PEG sockets recessed to the reference add-in-card PCB height, but oversized, taller-PCB cards place them higher, which affects how much room the card needs beyond its nominal height. This gets even more important with the new 12-pin power connectors and their limited bend angles; having a number that shows how much space you actually need for your card in the case would be good.

From my perspective, the MVP approach would be to include a drawing showing how each dimension is measured in the question-mark tooltips next to the parameter names, and to stick to one unified way of measuring cards and cases. But if you're already going to the lengths of CT scanning the cards, maybe it would make sense to show the exact dimensions of the cards on their photos/scans.

Finally, the test page has neither an exact vendor product ID nor a link to the vendor's product page, just the product name, and that's a problem: cards can have different cooler dimensions between variants, even to the point where v1 is a 2.5-slot card and v2 is a 2-slot card. This will matter for other components too, such as SSDs and memory kits, where one product name can cover different SKUs with different internal components, possibly varying between distribution regions. For example, a 16 GB G.Skill Ripjaws V 3200 MHz CL16 kit could have up to 5 different SKUs according to this community research on Ryzen memory compatibility from back in the day: https://www.reddit.com/r/Amd/comments/62vp2g/clearing_up_any_samsung_bdie_confusion_eg_on/
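To illustrate the kind of unified, per-axis measurement I mean, here's a minimal sketch in Python. Every field name and number is made up for illustration; it's not anyone's actual data model.

```python
# Hypothetical per-axis GPU measurement record and a fit check against a case.
# All names and numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class GpuDimensions:
    length_mm: float            # bracket fingers to card tail, bracket included
    height_mm: float            # slot edge to the top of the cooler shroud
    front_thickness_mm: float   # PCB front face to the far side of the cooler
    back_protrusion_mm: float   # PCB back face to backplate/screw heads (max 5 mm)
    power_clearance_mm: float   # extra room above the card for power cables

    def fits(self, case_front_mm: float, case_back_mm: float,
             case_height_mm: float, case_length_mm: float) -> bool:
        """Check each axis against the case's published clearances."""
        return (self.front_thickness_mm <= case_front_mm
                and self.back_protrusion_mm <= case_back_mm
                and self.height_mm + self.power_clearance_mm <= case_height_mm
                and self.length_mm <= case_length_mm)

# Two 40 mm "2-slot" cards that are not interchangeable:
card_a = GpuDimensions(285, 112, 35, 5, 30)  # uses the full 5 mm behind the PCB
card_b = GpuDimensions(285, 112, 38, 2, 30)  # only 2 mm behind, 3 mm more up front

# In an SFF case with 36 mm of room in front of the PCB plane,
# card_a fits while card_b doesn't, despite identical overall thickness.
print(card_a.fits(36, 5, 150, 300), card_b.fits(36, 5, 150, 300))
```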
2. I agree with most of your comment. Yes, it's wasteful, but Wi-Fi support is a bit tricky. On ITX it should be a standard feature, because an add-in card on ITX gets tricky. There was a time when you could purchase the Wi-Fi card later for a lot of ITX boards, but apparently it didn't make much sense to include a slot for someone who wasn't going to use it anyway. And by now I'd say a lot of people use Wi-Fi and don't care to run Ethernet cables around the house, so it's a mainstream feature. Two different types of Ethernet, though: you're 100% right there. That's a premium feature for specific users, and we're forced to pay for it even if we just want the top-tier chipset. This reminds me of the long-standing issue of not being able to get a reasonably priced entry-level laptop with the best APU and without a dedicated GPU, because vendors paired those expensive SKUs with laptops specced out to the max, so there was no affordable entry-level APU laptop for online gaming.
3. Were there really cases that had just a single Type-A port connected to this 20-pin header and nothing else? I think a lot of cases had a single 3.0 port on the front and various other things connected to the other port of the header, like a hub with multiple 2.0 connectors and a card reader or something like that. The 2.0 header was really nice. USB is still a bus, so it made sense to connect two ports to a single header by default. Also, I kind of don't get the requirement of having Type-C on the front of the case, except if we were to drop support for Type-A altogether. If you need the bandwidth really often, the motherboard should have a Type-C at the back and you'd use that; for anything else, a Type-A to Type-C cable seems reasonable. Yes, Type-A has the downside of having to be plugged in in the correct orientation, but a lot of devices still use it.
4. This implementation brings more issues than it solves in the first place. First of all, a standard where boards have all the case connectors on one edge, facing sideways, with cases providing some kind of surface that hides the connected cables from the front, could have been enough. But the whole point seems to be to make it proprietary to each vendor, so that when you want to upgrade, you stick with the vendor you already bought from. Especially with the GPU power connector from ASUS. Isn't/wasn't the motherboard Type-E header for the front-panel Type-C originally an ASUS-proprietary connector? We still get lower-end boards without that connector, so a mainstream case still has to support the terrible 20-pin USB 3.0 header. Those should be long gone by now, replaced by two Type-E connectors in place of one 20-pin header.

If the standard for the GPU power connector doesn't open up quickly, we'll end up with GPUs that only fit specific ASUS boards, and they probably won't fit other types of motherboards unless it's ITX plus some adapter onto this power connector sticking out past the motherboard PCB. All it needed to be was an 8-pin or 12-pin connector in a stealth cavity on the inner side of the card, connected before you plug the GPU into the slot. Potentially the card could also have had the same power connector on the outside, covered with a plastic plug, and a DIP switch selecting where it takes power from. The whole thing will just bring more pre-builts that end up like HP/Dell/Lenovo systems: proprietary standards and limitations on how you can fix or upgrade them.
5. The thing is, those crashes are not bluescreens; at least in my case, nothing from yesterday or today shows up in BlueScreenView. It's like the GPU is doing something that triggers some kind of failsafe protection (in the PSU?), so the system doesn't even have time to log anything.
6. I could play most of the time as well, but I had one or two system crashes in the meantime that I assumed were something else, until yesterday, when it started happening constantly. My first guess is that there's something in the driver shader cache that accumulates over time with this new driver, and depending on when the issue hits and how often that cached entry is accessed, it may determine whether it crashes the application, crashes the whole system, or just produces a minor artifact. My second guess is that it may be affecting people who haven't done a proper clean driver install in a long time, and there's some weird interaction with leftovers from an older driver. If you want to poke at the shader-cache theory yourself, there's a rough sketch below.
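Here's that sketch. The folder locations are my assumption of where NVIDIA's DirectX/OpenGL shader caches live on Windows, so verify them on your own machine before deleting anything.

```python
# Rough sketch: report the size of (and optionally clear) what I believe are
# NVIDIA's shader cache folders on Windows. Paths are an assumption; verify first.
import os
import shutil

local = os.environ.get("LOCALAPPDATA", "")
candidates = [
    os.path.join(local, "NVIDIA", "DXCache"),  # assumed DirectX shader cache
    os.path.join(local, "NVIDIA", "GLCache"),  # assumed OpenGL shader cache
]

for path in candidates:
    if os.path.isdir(path):
        size = sum(os.path.getsize(os.path.join(root, name))
                   for root, _, files in os.walk(path) for name in files)
        print(f"{path}: {size / 2**20:.1f} MiB")
        # Uncomment to actually clear it; the driver rebuilds the cache on demand:
        # shutil.rmtree(path)
    else:
        print(f"{path}: not found")
```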
7. Summary

Nvidia's 545.84 driver seems to cause many issues, ranging from flickering black boxes on the screen to application crashes and full system crashes when GPU load spikes.

Quotes

The Reddit thread quotes accounts that rolling back to 537.58 fixes the problems, or simply that the issues started occurring after the driver update. Multiple quotes describing similar issues can be found on the Nvidia driver feedback thread as well; people just don't discuss rolling back to the previous driver there.

My thoughts

I stumbled upon this because I had the issue. Initially I thought it was a CPU issue (recently upgraded) or a GPU issue (I started thinking my old RTX 2070 was failing), because I was getting flickering black boxes on the screen while watching videos in a browser and had two system crashes recently. But yesterday the crashes became repeatable, and today I had a 100% repro on launching a game. I found the Reddit thread and tried just switching the driver through GeForce Experience to the creator (Studio) version, which was still on the previous driver version, and that didn't solve the problem; doing a proper clean with DDU and installing the previous GRD version did fix it.

What's noticeable is that my 100% repro shut down my PSU completely when launching a game. Not a system crash and reboot: the PSU shut down completely and I had to start the system again with the power button. And I have a configuration that should be fine on a 400 W PSU, running on a 700 W Gold PSU that's barely 2 years old. Multiple people report just restarts, so maybe my full shutdown is unique for some reason. I suspect the driver may be causing some audio crackling issues as well, which started happening for me around the same time, but it's too soon for me to tell if that issue is gone.

I think it's important to publicise this, both so people know about the issue and avoid this driver, and to put some focus from the Nvidia driver team on solving the situation quickly.

Sources

https://www.nvidia.com/en-us/geforce/forums/game-ready-drivers/13/528784/geforce-grd-54584-feedback-thread-released-101723/
8. Please check if perhaps you're on the Nvidia 545.84 driver. My PC started behaving like this yesterday after some time of playing, when the GPU load was rising as I joined the next session in an online game. Today I had a 100% repro on just launching another game, to the point that the power supply shut down completely and didn't reboot. A quick way to check your driver version is sketched below.
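This uses nvidia-smi, which ships with the driver; the only assumption is that it's on your PATH:

```python
# Print the installed NVIDIA driver version and flag 545.84 specifically.
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
version = out.stdout.strip()
print(f"installed driver: {version}")
if version.startswith("545.84"):
    print("this is the driver people in this thread are rolling back from")
```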
9. One would have to investigate this extensively, though, to see whether 80% of those negative reviews were marked off-topic unfairly or not, because if those 80% of players just wrote things along the lines of "get rekt gaijin", then Steam probably has to react this way and not count those as proper reviews. And I would expect a lot of people to be lazy and not write much in a review.
10. As someone who has spent a lot of time in this specific game, I'd like to summarise the problem in a reasonably short form.

The general outrage is about Gaijin slowly boiling the frog by making the grind harder and harder again, and what lit the fire was the "we know what's better for the game and for the player" approach to damage control in the first place. They have a track record of handling outrages by stepping back for a moment and then introducing the planned changes anyway, piece by piece, step by step, over time.

Most players are angry about the economy being built so that you cannot keep playing a vehicle you have unlocked, because you run out of the in-game currency that is spent on ammunition and repairs and earned through good scores in a match. You cannot get good scores with a new vehicle that is stripped of the better part of its functionality, so you need to "spade it from stock", meaning you grind to unlock its features. You keep running in circles: running out of currency on the new vehicle and going back to lower-tier vehicles to farm currency in order to play the best ones. (A toy simulation of this loop is sketched after this post.)

Their PR responses state that the reasoning is to make sure players don't progress too fast and move up only once they're skilled enough with the vehicles of the current tier, which would be fine if:

- they actually invested in teaching new players the mechanics at each tier extensively, rather than purposely pushing them into frustration;
- they weren't selling premium top-tier vehicles that explicitly let new players jump into high-tier gameplay without the required skill for that level, which effectively ruins the game for others playing at those tiers, and for those new players as well if they don't know the mechanics.

They talk about continuously processing feedback, while most of the community is disillusioned about the forums being a way to forward feedback; it's just a facade. The processes set up on the forums for passing feedback leave suggestions rotting for ages, hardly ever reaching the devs, because the pipeline is throttled to one suggestion per category per month. Classic divide and conquer, making the players of each game mode argue about who has it worse.

Additional notes from me, as personal opinions:

- The game employs FOMO and bait-and-switch tactics in time-limited events and vehicle sales.
- The game's network architecture is not really what the company stated (although that was years ago); it runs peer-to-peer, where clients simulate the game state and dedicated servers decide who's right and merge the outcome into what is then stored as the server replay.
- The updates keep faking content upgrades every two or three major patches: VFX and SFX get bumped (louder sounds, brighter emissive color on flames and explosions, so everything burns brightly) and are then steadily faded back over time. It's kind of like the endless-staircase sound effect (a Shepard tone).
- Client replays are not representative of what happened in your game locally; there is some post-processing hiding malice and errors before a client replay is stored. You could get back to cover, the game would ignore that move, and you'd get killed, with both the killcam and the local client replay showing you standing there in the open. You have to capture footage live to have proof of shenanigans happening in the game.
- The game rigs odds in matchmaking and in-game damage simulation (and a few other things) against the seasonal tasks, to artificially slow progression in them and extend session play time.
- The game's PR mostly ignores the arcade mode, supporting mostly realistic-mode-oriented content creators, so the more vocal realistic-battles community treats arcade players as just a small fraction of newbies on training wheels who should jump over to realistic modes, while server replay history proves there are roughly equal numbers of both. Arcade shows the flaws of the game better, and having many eyes constantly on it with live footage could quickly disillusion too many players about what the game is.
- I have no idea how this game got certified for consoles and ended up Best on Deck on Steam, when the controller support is terrible and, moreover, the keyboard-and-mouse keybindings are a mess to handle.

The game could be the best combined-arms game on the market if not for the predatory monetisation strategies and the piles of core game issues that never get addressed.
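To make the stock-grind loop concrete, here's a toy simulation. Every number in it is invented; the point is only the shape of the loop, where a stock vehicle's net income is negative, not the actual in-game values.

```python
# Toy model of the stock-vehicle currency loop. All numbers are made up.
balance = 50_000          # starting in-game currency
repair_and_ammo = 12_000  # cost per match on the new, stock top-tier vehicle
stock_earnings = 7_000    # what the stripped stock vehicle earns per match
farm_earnings = 15_000    # what a fully spaded lower-tier vehicle earns
farm_cost = 4_000         # its per-match cost

matches = 0
while balance >= repair_and_ammo:
    balance += stock_earnings - repair_and_ammo  # net -5,000 per match
    matches += 1
print(f"broke after {matches} matches on the stock vehicle")

# ...so you drop back down the tree and farm until you can afford to try again:
farm_matches = 0
while balance < 50_000:
    balance += farm_earnings - farm_cost         # net +11,000 per match
    farm_matches += 1
print(f"{farm_matches} lower-tier matches to climb back to where you started")
```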
11. Not taking sides in this argument, but that's just a correlation, not evidence. The type of job is different too: often creative (software engineers) vs. often not (math teachers). Also, software engineering is tied to creating a product that is repeatable and can be sold multiple times, and sadly society doesn't value teachers the same way. Finally, how would creating a union, even just a union of workers at one specific workplace, hurt mobility? Just curious what the angle is here.
12. I'm not arguing about whether it's good or bad; it's a matter of preference. I remember it being really cool in NFS Underground.
13. I agree that this is a valid use, but the sensation of speed can also be achieved by changing the camera FOV while you're accelerating, and various games do exactly this, adding some simple blurred lines as "wind" rather than actual motion blur. I feel like Linus omitted this part of the problem by focusing specifically on the overuse of blur in non-vehicular games, where you walk around and get blur while turning. A minimal sketch of the FOV trick is below.
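The function name and all the constants here are arbitrary, just shaped so low speeds barely change the view:

```python
# Minimal sketch of widening camera FOV with speed instead of motion blur.
# Names and constants are arbitrary, for illustration only.
def speed_fov(speed_kmh: float,
              base_fov: float = 75.0,
              max_fov: float = 100.0,
              top_speed_kmh: float = 300.0) -> float:
    """Interpolate the FOV from base to max as speed approaches top speed."""
    t = max(0.0, min(1.0, speed_kmh / top_speed_kmh))
    t = t * t  # ease the curve so low speeds barely widen the view
    return base_fov + (max_fov - base_fov) * t

for v in (0, 100, 200, 300):
    print(f"{v:3d} km/h -> {speed_fov(v):5.1f} deg")
```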
14. For the record, it's not like 30 out of 30 systems had the issues, but I'd say at least a dozen out of 30. For the first and second generation of Ryzen it was hard to get exactly the sticks with chips from the motherboard's memory QVL, but for the third gen (3600X/3700X) and the 5000 series (Ryzen 5600X) it was possible, and the issue still occurred on some of those systems. I shuffled various kits between the systems that had the issues, and switching kits didn't really help: a new kit would progressively cause more issues when running XMP, and even after downclocking to 2933, which would normally keep sticks stable from day 0, they would still have issues, fewer than at full XMP but still there.

I'm currently running the F4-3600C16D-32GVKC kit from G.Skill, and its QVL listed my board (ASRock B550 Phantom Gaming ITX/ax) back when I purchased it: http://web.archive.org/web/20220507193306/https://www.gskill.com/qvl/165/184/1562831784/F4-3600C16D-32GVKC-QVL (screenshot, since the Wayback Machine is slow). But now the support for this kit on ASRock boards isn't listed anymore: https://www.gskill.com/qvl/165/184/1562831784/F4-3600C16D-32GVKC-QVL And it's not like I only had these problems on ASRock boards; I had issues on ASUS and Gigabyte boards as well.

Anyway, the problem with QVLs in my region is that a lot of kits available in stores are close by vendor ID but not exact matches; they seem to use dies from the same manufacturer, but it was still hard to get an actual QVL match for most of the AM4 generation.
15. At 23:15, the hot take about Ryzen having issues and Intel being better: I'm not saying either one is better, but I have experienced a lot of XMP issues across all AM4-generation CPUs (I've built over 30 Ryzen systems myself, and no, I'm not counting building the same system over and over again, i.e. bulk orders), and I've seen people describe similar issues in comments under memory kits in our local stores and on Reddit. I haven't had the chance to check this on AM5/DDR5 yet. I've seen multiple people state that they RMA'd everything, came back to the same issue on a whole new set of hardware, and that AMD support wasn't helpful, so they're never trying AMD CPUs again.

First of all, the Reddit community around Ryzen was fully in fanboy mode whenever I tried discussing the issue and showing proof that it exists. The fact that this problem doesn't seem to exist for YouTubers is also a problem for whoever encounters it.

Secondly, what the problem is: when running XMP above 2933 MHz with two sticks, or above 2666 MHz with four sticks, the system becomes more and more unstable over a span of a week to a few months. Browser tabs crash, Explorer and apps crash, and there's a fancy lockup where everything already running keeps working perfectly fine but the system cannot start a new process. Similar, if not exactly the same, problems occurred across various configurations: different XMP kits, old vs. fresh installs, big cooler vs. low-profile cooler, expensive sticks from the QVL and cheap sticks, boards from different vendors, and so on.

Thirdly, the state of the memory (in-system) seems to degrade over time. I had some sticks become literally unusable in Windows after running a month on XMP, yet they pass Memtest perfectly fine; sketched below is the kind of in-OS soak test I'd use to try to catch it, since bare-metal Memtest passes. After years of tinkering with various BIOS settings around these issues, I'm at a loss figuring it out. My guesswork is that some regions are getting sub-par chips in memory kits or CPUs, OR there's some tiny detail in the setup of these systems that even I am enough of an idiot to miss over and over again for years, in which case the everyday Joe buying a Ryzen system isn't protected from this issue by perfect idiot-proofing.
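For completeness, this is what I mean by an in-OS soak test. It's a crude sketch, not a Memtest replacement, and the buffer sizes are arbitrary; it just keeps rewriting and re-verifying RAM under a running OS, which is where the instability actually shows up for me:

```python
# Crude in-OS memory soak test sketch: fill buffers with a pattern, then
# re-verify and rewrite them in a loop. Arbitrary sizes; adjust to free RAM.
import random
import time

CHUNK = 64 * 1024 * 1024  # 64 MiB per buffer
COUNT = 16                # ~1 GiB total

buffers = [bytearray(bytes([random.randrange(256)]) * CHUNK) for _ in range(COUNT)]

iteration = 0
while True:
    iteration += 1
    for i, buf in enumerate(buffers):
        expected = buf[0]
        # verify every byte still holds the pattern written last pass
        if any(b != expected for b in buf):
            raise SystemExit(f"corruption in buffer {i} on iteration {iteration}")
        # rewrite with a fresh pattern to keep the memory controller busy
        buf[:] = bytes([random.randrange(256)]) * CHUNK
    print(f"iteration {iteration} ok")
    time.sleep(1)
```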