Mira Yurizaki

Member
  • Posts

    20,911
Everything posted by Mira Yurizaki

  1. To add to this, just because a processor is x86-64 compatible doesn't mean it runs the same way as another: https://en.wikipedia.org/wiki/X86-64#Differences_between_AMD64_and_Intel_64
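
    If you want to see which flavor you're on, here's a rough sketch that reads the vendor string on Linux (the /proc/cpuinfo path and the vendor_id strings are the usual conventions, but treat this as an assumption rather than a portable check):

```python
# Rough sketch: report which x86-64 implementation the CPU identifies as.
# Linux-only; assumes /proc/cpuinfo exists and uses the conventional
# vendor_id strings. AMD64 and Intel 64 differ in a handful of corner
# cases, which is why some low-level code branches on the vendor.
def cpu_vendor():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("vendor_id"):
                return line.split(":", 1)[1].strip()
    return "unknown"

vendor = cpu_vendor()
print({"GenuineIntel": "Intel 64", "AuthenticAMD": "AMD64"}.get(vendor, vendor))
```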
  2. Intel's stock coolers have never really been all that great. A $30-$40 cooler like the Arctic Freezer 34 will do wonders.
  3. It's subjective. If you're fine with the sound of traffic, then you're fine. There is, however, an upper limit you shouldn't cross to avoid permanent hearing damage.
  4. It's also fun knowing the macOS kernel is open source. And then everything else on top of it that Apple created isn't.
  5. I think this is also related to people seeing lower temperatures due to using a better cooler. There's also the idea that, for example, an AIO is always better than air cooling as far as cooling performance goes. They run Cinebench once, or maybe Prime95 for 5 minutes, and cite the result as proof, without realizing water has a much higher specific heat than metal, so a short run mostly shows off the loop's thermal mass rather than its steady-state cooling.
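
    Some back-of-the-envelope numbers on why a short run flatters a water loop (the coolant/metal masses and the 100 W load are made-up illustrative values):

```python
# Back-of-the-envelope: how long a cooler's thermal mass alone can soak up a
# 100 W load before warming 10 degrees C. All masses and loads are made-up
# illustrative values; this ignores the fans actually dissipating heat.
c_water = 4.18    # J/(g*K), specific heat of water
c_copper = 0.385  # J/(g*K), specific heat of copper

water_g = 200.0   # assumed coolant mass in a small AIO loop
copper_g = 500.0  # assumed metal mass in an air cooler

dT = 10.0         # allowed temperature rise, K
cpu_watts = 100.0 # assumed sustained heat output

for name, c, grams in [("AIO coolant", c_water, water_g),
                       ("air cooler metal", c_copper, copper_g)]:
    seconds = c * grams * dT / cpu_watts
    print(f"{name}: soaks up ~{seconds:.0f} s of load before rising 10 C")
```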
  6. Mira Yurizaki

    idk how people seem to have passion for hardwar…

    My passion is more about how it really works. What you see on tech websites and marketing is only scratching the surface.
  7. While the per-component power draw readings were nice, they didn't measure the system total until after the rig was built. Having total power draw beforehand would've been more meaningful, especially because of the glaring difference between Core and Ryzen at idle, for which the only explanation I can think of is that Ryzen is more of a system-on-a-chip.
  8. My assumption is a smaller GPU means less bandwidth is needed to feed it. Something like an RTX 2080 Ti needs 600+ GB/s of bandwidth only because of all the execution units it has to feed. If you break those up into separate chiplets, presumably you don't need anywhere near that much per chiplet, though you still do combined. EPYC has to feed what is essentially 8 Ryzen 7 dies with its I/O chip. You could do the same with GPUs to get a combined VRAM pool, rather than trying to make separate GPU modules that work together in an internal SLI fashion. AMD solved the NUMA issue by making memory go through one source.
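
    Rough arithmetic on the bandwidth point (the six-way split is hypothetical, mirroring the 2080 Ti's 6 GPCs):

```python
# Back-of-the-envelope: per-chiplet bandwidth if a 2080 Ti-class GPU were split.
total_bw = 616   # GB/s: 14 Gbps GDDR6 * 352-bit bus / 8
chiplets = 6     # hypothetical: one chiplet per GPC

print(f"Per chiplet: ~{total_bw / chiplets:.0f} GB/s on average")   # ~103 GB/s
print(f"Combined demand at the shared pool: still ~{total_bw} GB/s")
```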
  9. Just a random one I like to throw out, which to me at this point is approaching ad nauseam territory. The myth is basically "NVIDIA does not support asynchronous compute." They do, at least since Pascal, because they meet the requirement for asynchronous compute as AMD themselves defined it, and there's a graph showing my GTX 1080 using two different work queues (or engines, as Microsoft calls them).
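
    If the "multiple work queues" part is hard to picture, here's a minimal, compute-only sketch using CuPy streams (an assumption that CuPy and an NVIDIA GPU are available); each stream is an independent queue the GPU can pull work from, which is the same basic idea those engines in the graph represent:

```python
# Minimal sketch: submit work to two independent GPU work queues (CUDA streams).
# Assumes the third-party cupy package and an NVIDIA GPU are available.
import cupy as cp

a = cp.random.rand(4096, 4096)
b = cp.random.rand(4096, 4096)

queue1 = cp.cuda.Stream(non_blocking=True)
queue2 = cp.cuda.Stream(non_blocking=True)

with queue1:                  # queue 1: a matrix multiply
    c = a @ b
with queue2:                  # queue 2: independent elementwise work
    d = cp.sin(a) + cp.cos(b)

queue1.synchronize()
queue2.synchronize()
print(float(c[0, 0]), float(d[0, 0]))
```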
  10. This is probably more of a problem with YouTube, but I've noticed recently that if I play an embedded YouTube video, I can't pause it. Even when I do the keyboard shortcut, it'll show an icon saying what mode it's supposedly in, but it'll still keep playing. Browser is Firefox 70.0.0.1 and no extensions are enabled.
  11. Something that looks like what AMD is doing with Zen 2's Threadripper and EPYC, combined with something like checkerboard rendering so that each GPU chiplet renders, say, every nth pixel or block of the screen to avoid the problem of uneven workloads. The lingering concern I had was that memory access can be indeterminate, but I don't think that's really the case with GPUs, considering their workload can be determined ahead of time. So as long as the drivers can schedule memory access such that the GPU chiplets have what they need by the time they're ready to work on the next thing, the system should work. The issue to overcome is that current multi-GPU setups keep the same dataset in each of their VRAM pools. Eliminate that, and an MCM-based GPU becomes more plausible. For example, if we take an RTX 2080 Ti with its 6 GPCs and make each GPC a chiplet, it won't necessarily need more bandwidth than it normally uses if you can find a way to make all the chiplets share the same VRAM pool.
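
    A toy sketch of the tile-assignment idea (the chiplet count and tile size are made up):

```python
# Toy sketch: hand out screen tiles to GPU chiplets round-robin so each one
# renders roughly every nth block. Chiplet count and tile size are made up.
WIDTH, HEIGHT = 3840, 2160
TILE = 64
CHIPLETS = 6

tiles_x, tiles_y = WIDTH // TILE, HEIGHT // TILE

assignment = {}  # (tile_x, tile_y) -> chiplet index
for ty in range(tiles_y):
    for tx in range(tiles_x):
        assignment[(tx, ty)] = (ty * tiles_x + tx) % CHIPLETS

# Each chiplet ends up with ~1/6 of the screen, interleaved across it.
counts = [sum(1 for c in assignment.values() if c == i) for i in range(CHIPLETS)]
print(counts)
```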
  12. If the memory usage is going up to 95%, this is probably your issue. You're running out of memory and the system has to page stuff in and out to storage, which can cause massive hiccups.
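
    A quick way to check whether that's what's happening (assumes the third-party psutil package is installed):

```python
# Quick check: is the machine near its RAM limit and leaning on the page file?
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

vm = psutil.virtual_memory()
sw = psutil.swap_memory()

print(f"RAM:  {vm.percent:.0f}% used ({vm.used / 2**30:.1f} of {vm.total / 2**30:.1f} GiB)")
print(f"Swap: {sw.percent:.0f}% used ({sw.used / 2**30:.1f} GiB)")

if vm.percent > 90:
    print("Close to the RAM limit; paging to storage is the likely cause of the hiccups.")
```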
  13. Mira Yurizaki

    Pop Quiz Time to Guess the Company! Get it righ…

    I would answer, but I don't think you'd count me ?
  14. Intel doesn't need to copy someone when they've done the solution before.
  15. Either would work, though I'm more partial to USB-based ones on the grounds that:

      • I can easily use it on any computer (I don't know if this is a concern for you, but I liked the option).
      • I can move the receiver around easily in case the rear I/O isn't facing the best direction to receive the Wi-Fi signal. A metal case can also block and interfere with Wi-Fi reception.

      The only caveat is that the USB adapter should use USB 3.0, especially for 802.11ac adapters.
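
    Some rough numbers on why USB 3.0 matters for 802.11ac adapters (nominal link rates; real-world throughput is lower on both sides):

```python
# Nominal link rates in Mbps; real-world throughput is lower on both sides.
usb2 = 480        # USB 2.0 signaling rate
usb3 = 5000       # USB 3.0 (5 Gbps)
ac_2stream = 867  # common 2-stream 802.11ac PHY rate on 80 MHz channels

print(f"802.11ac (2 streams): {ac_2stream} Mbps")
print(f"USB 2.0 ceiling: {usb2} Mbps  -> the USB bus becomes the bottleneck")
print(f"USB 3.0 ceiling: {usb3} Mbps -> plenty of headroom")
```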
  16. I tried Sony's WH-1000XM3 wireless noise canceling headphones at Best Buy. Holy moly, it was like I stepped into an anechoic chamber.

     

    Unfortunately I won't buy them, because headphones like that make my ears uncomfortably hot. But I did pick up the WI-1000X NC earbuds; they perform close enough, and they're definitely a lot better than the Sony wired NC earbuds I had.

      WereCat:

      Yep. That's why I bought them. I still use my Sennheisers most of the time while on the PC, as I don't ever sweat in them.

      Sony for phone. 

  17. From https://www.tweaktown.com/news/69186/nvidia-next-gen-hopper-gpu-arrives-ampere-smash-intel-amd/index.html This to me looks more like "SLI on a package" than say what AMD is doing with its recent processors.
  18. Another thing to poke at with "why Apple didn't go with AMD": let's compare Apple to, say, Dell, which doesn't have to deal with the development of an OS or even drivers. They may do a few things to add their own branding, but for the most part, once the item is shipped, Dell is likely hoping that a company's IT department keeps the software side running while they just handle the hardware itself. This is why Dell can supply hardware in differing configurations: they're not the ones who have to make sure the OS and its drivers work with the hardware beyond passing a smoke test. If there is a problem, they complain to whoever's in charge of said OS or drivers, and that party takes care of it. Apple, on the other hand, does not have this luxury. They have to make the system software that sits on top of their hardware, along with developing and maintaining some of the drivers. The way I see it, Apple would rather have as few hardware configurations as possible. It's a logistical and maintenance concern, especially when they're in charge of the entire product stack.
  19. I think it's still a problem that a term is used in different contexts, and people get confused when someone says that X is not Y, even though Y in a different context would mean that X is Y.
  20. I have a G-Sync monitor, so I suspect there's some strange interaction with trying to make one monitor line up with the frame rate while the other is still going at 60Hz or whatever.
  21. A caveat: I'm under the impression you can't run just any linker and get an executable. Object files contain binary for a particular target, so if you have something compiled for, say, ARM, running a linker meant for an x86 Windows target won't work.
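
    As a small illustration of object files being tied to a target, here's a sketch that reads an ELF object's header to see which architecture it was compiled for (ELF only; just a few common machine codes shown):

```python
# Sketch: read an ELF object file's header to see which architecture it
# targets. ELF only (Linux-style objects), not COFF or Mach-O.
import struct

MACHINES = {0x03: "x86", 0x28: "ARM (32-bit)", 0x3E: "x86-64", 0xB7: "AArch64"}

def elf_machine(path):
    with open(path, "rb") as f:
        header = f.read(20)
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    endian = "<" if header[5] == 1 else ">"  # EI_DATA: 1 = little-endian
    (e_machine,) = struct.unpack_from(endian + "H", header, 18)  # e_machine field
    return MACHINES.get(e_machine, hex(e_machine))

# e.g. elf_machine("main.o") might return "AArch64" for a cross-compiled object
```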