Mira Yurizaki

Everything posted by Mira Yurizaki

  1. Did I have to provide a "this is a hypothetical" disclaimer? Because you know, it was hypothetical.

     Then how come a 1.86 GHz Core 2 Duo achieves better performance than a 3.73 GHz Pentium XE 965? EDIT: One more game, because this game was definitely developed during a time when single-core systems were basically the only systems: That's pretty bad.

     Technically speaking, a single-thread game doesn't exist. At least not in Windows. A game that runs all of its tasks synchronously, sure, but not a single thread. In any case, I don't see what this has to do with anything. Having more threads doesn't mean anything by itself; it depends on how the software was designed and implemented. Modern games may have as many threads as earlier ones, but because modern ones have better software design backing them, things are simply more efficient.

     If anything, what's really been the bottleneck for PC gaming is getting graphics up to speed. DX11 and OpenGL are primarily single-task APIs. There's only one GPU task list generator, and that's where most of the graphics bottleneck is in a lot of games (a rough sketch of that single-submission-thread setup is at the end of this post). This is practically the whole point of DX12 and Vulkan. But even then, switching to them simply doesn't give you better performance if the performance bottleneck wasn't the graphics subsystem to begin with.

     If SMT is not going to be a thing for the next-generation consoles, it's because SMT ruins deterministic behavior. Games are soft real-time applications, and the basic requirement for a stable real-time application is having the system be as deterministic as possible. When you're on a platform where performance can be limited and you need to pull out every trick in the book, knowing how the system behaves with a high degree of certainty is a huge benefit.

     That's... what they're doing now. That's what they've been doing. At least Sony has since the PS3. If there's any reason why PC games don't seem to utilize the CPU often, it's because the task is too easy to do on the CPU and Windows isn't going to fire up more cores if one or two cores can handle it. It's the same reason why, if you play an old 3D game on a modern GPU, the GPU isn't going to crank up everything to max speed, because there's no point in generating a million FPS* other than for craps and laughs. *exaggeration

     Okay, do you? Maybe you should do it if you know better than people who've been in the industry for decades.
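     To illustrate the "one GPU task list generator" point, here's a minimal C++ sketch. It uses no real D3D or OpenGL calls; the string queue is just a stand-in for an API command list. The idea is that worker threads can prepare draw data in parallel, but under a DX11/OpenGL-style model everything still funnels through a single submission path, so extra CPU cores stop helping once that path is saturated.

     // Sketch only: the queue of strings stands in for a graphics API command list.
     #include <cstdio>
     #include <mutex>
     #include <queue>
     #include <string>
     #include <thread>
     #include <vector>

     std::mutex submitMutex;
     std::queue<std::string> commandQueue;   // stand-in for the single API command list

     void buildDrawCalls(int workerId) {
         // Parallel part: each worker prepares its own draw data.
         for (int i = 0; i < 3; ++i) {
             std::string cmd = "draw(worker " + std::to_string(workerId) +
                               ", object " + std::to_string(i) + ")";
             // Serial part: only one thread at a time may touch the "API context".
             std::lock_guard<std::mutex> lock(submitMutex);
             commandQueue.push(cmd);
         }
     }

     int main() {
         std::vector<std::thread> workers;
         for (int w = 0; w < 4; ++w)
             workers.emplace_back(buildDrawCalls, w);
         for (auto& t : workers)
             t.join();

         // The single "render thread" drains the queue and talks to the GPU.
         while (!commandQueue.empty()) {
             std::printf("submit: %s\n", commandQueue.front().c_str());
             commandQueue.pop();
         }
     }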
  2. Who knows? All I can guess is Intel is leaving the processor open for orders.
  3. The only thing that really tickled me was Ice Lake and the Gen11 GPU: https://www.anandtech.com/show/15092/the-dell-xps-13-7390-2in1-review-the-ice-lake-cometh And maybe DXR being supported on GeForce 10 so I could play around with the tech demos.
  4. I'm going to guess the old keyboard is rubber dome. In this case, most rubber dome keyboards are built with one giant sheet of rubber sandwiched between the PCB and the keys. This acts as inherent spill protection because the liquid doesn't immediately come into contact with anything electronic. If we look at this keyboard for example, all of the important electronics are at the top. If the fluids flow towards the bottom (and a lot of keyboards are tilted up at the rear), then they won't touch the electronics. This is in contrast to mechanical keyboards, where there is no rubber sheet over the key PCB. It's just a bare PCB with a switch soldered to it, then the casing goes over that.
  5. The difference that I'm seeing is AMD continued to make them and offer them to the general public. Here Intel is saying "if you want the Pentium G3420, tell us now so we can make them. Otherwise that's it." This next bit is for the thread in general, because I decided to actually read the source material. The product in question is indeed the Haswell based Pentium G3420, but the product code is CM8064601482522. Looking this up brings me to http://www.cpu-world.com/CPUs/Pentium_Dual-Core/Intel-Pentium%20G3420.html which says that CM8064601482522 is an OEM/tray part, meaning it's meant for system builders. If an OEM wants to continue building/servicing computers with this part, Intel is giving them a warning to make their orders.
  6. Part of me doesn't think they do that because public software releases, at least if you're observing good software development practices, require you to do a regression test for each release. But then again, maybe they release the Media Creation Tool with a "you better know what you're doing" disclaimer.
  7. Or someone who wants to sell systems in "lesser developed" or whatever countries. Like how there's the China-only GTX 1060 5GB
  8. The ISO Microsoft provides is likely only the release version of the latest Windows 10 build they're offering. You'll very likely have to install updates.
  9. The flaw with this is not accounting for processor performance or what the thread is really doing. Consider a quad-core 2.0 GHz processor and a single-core 8.0 GHz processor, with the same specifications otherwise, and a 4-thread application that runs on them. Unless the threads all have the exact same amount of work, the single-core processor is going to have higher performance overall, minus the context-switching overhead (the quick arithmetic below shows why). Do you ever wonder why, despite the PS4 and XB1 having 8-core processors, 8 cores have never been a requirement for games? It's because those 8-core processors have much lower performance compared to, say, a Core i5 of the time. The sheer per-core performance of a desktop-class processor was more than enough to make up for the lower number of threads it could run at the same time. And do you wonder why Ashes of the Singularity, a game that is known for scaling well with multiple cores, doesn't have a console port? It's because Ashes actually requires a level of performance that the consoles can't deliver. This is in general and I hate to ask this, but do people even know exactly how SMT works?
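     To put rough numbers on the hypothetical (the per-thread workloads are made up, and context-switch overhead is ignored):

     // Four threads with uneven work on 4 cores @ 2 GHz vs 1 core @ 8 GHz.
     #include <algorithm>
     #include <cstdio>
     #include <numeric>
     #include <vector>

     int main() {
         // Work per thread, in billions of "cycles of work" (made-up values).
         std::vector<double> work = {8.0, 2.0, 2.0, 2.0};

         // Quad core @ 2 GHz: each thread gets its own core, so the slowest
         // thread sets the finish time.
         double quadTime = *std::max_element(work.begin(), work.end()) / 2.0;

         // Single core @ 8 GHz: all work runs back to back (ignoring context
         // switching), so the total amount of work sets the finish time.
         double totalWork = std::accumulate(work.begin(), work.end(), 0.0);
         double singleTime = totalWork / 8.0;

         std::printf("quad core   @ 2 GHz: %.2f s\n", quadTime);    // 8.0 / 2 = 4.00 s
         std::printf("single core @ 8 GHz: %.2f s\n", singleTime);  // 14.0 / 8 = 1.75 s
     }

     With perfectly even work (3.5 billion "cycles" each) the two finish at the same time; any imbalance favors the fast single core.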
  10. I don't see how this relates to an app having more threads than the number of threads a CPU can run being a problem. If an app has more threads ready to run than the CPU can process at once, at worst all that means is the CPU can't keep up with the processing demands of the application. There's no additional overhead for having more threads available to run than the number of threads the CPU can run at once: threads either run for their time slice or they yield and stop early (see the sketch below). If threads are constantly getting on the CPU and yielding shortly after, that might be a problem, but that's more of an issue with how the software was designed/implemented.
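     A small sketch of that point, using nothing beyond standard C++ threads (the thread count of 64 and the 100 ms sleep are arbitrary): spawn far more threads than the CPU can run at once and nothing bad happens, because most of them are blocked rather than ready to run.

     #include <chrono>
     #include <cstdio>
     #include <thread>
     #include <vector>

     int main() {
         unsigned hwThreads = std::thread::hardware_concurrency();
         std::printf("CPU can run %u threads at once\n", hwThreads);

         // Deliberately oversubscribe: 64 threads regardless of core count.
         std::vector<std::thread> threads;
         for (int i = 0; i < 64; ++i) {
             threads.emplace_back([] {
                 // A "mostly idle" thread: it blocks, yielding the CPU to others.
                 std::this_thread::sleep_for(std::chrono::milliseconds(100));
             });
         }
         for (auto& t : threads)
             t.join();

         std::printf("all 64 threads finished without issue\n");
     }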
  11. I'm going to need a source that says Intel is intentionally crippling i7s or whatever to turn them into i5s.
  12. If you're increasing the radiator size, all other things being equal, then the CPU temperature should drop (rough numbers below).
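     Back-of-the-envelope version of that claim, with every value made up: at steady state the CPU temperature is roughly ambient plus heat output times the total thermal resistance, and a bigger radiator lowers the radiator's share of that resistance.

     #include <cstdio>

     int main() {
         const double ambient = 25.0;   // deg C (assumed)
         const double power   = 150.0;  // W dumped into the loop (assumed)
         const double rBlock  = 0.10;   // deg C per W, block + TIM (assumed)

         const double r240 = 0.15;      // deg C per W for a 240 mm radiator (made up)
         const double r360 = 0.10;      // deg C per W for a 360 mm radiator (made up)

         std::printf("240 mm rad: %.1f C\n", ambient + power * (rBlock + r240)); // 62.5 C
         std::printf("360 mm rad: %.1f C\n", ambient + power * (rBlock + r360)); // 55.0 C
     }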
  13. And what if i5s are just rejected i7s that couldn't pass muster? If you're going to argue that Intel should do better QA, then go tell AMD to do a better job on their end too, because they have plenty of products that are basically "the same silicon as the higher-end variant, but less feature rich"
  14. This is the only reason. If someone is asking for it and is paying whatever price Intel is offering it at, then Intel will make the product.
  15. I wonder if the issue with LCD image interpolation is that the images are sampled at the pixel level rather than the sub-pixel level, and if so, whether sampling them at the sub-pixel level would change things (a toy sketch after the link below shows the difference).
    http://entropymine.com/imageworsener/subpixel/
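     A toy sketch of what I mean, assuming a 1D grayscale source row, an RGB-stripe panel, and plain linear interpolation (none of this comes from the linked page): "pixel level" samples the source once per output pixel, while "subpixel level" samples it three times, once at each subpixel's actual horizontal position.

     #include <cstdio>
     #include <vector>

     // Linearly interpolate the source row at fractional position x (in source pixels).
     double sampleSource(const std::vector<double>& src, double x) {
         if (x <= 0.0) return src.front();
         if (x >= src.size() - 1.0) return src.back();
         int i = static_cast<int>(x);
         double f = x - i;
         return src[i] * (1.0 - f) + src[i + 1] * f;
     }

     int main() {
         std::vector<double> src = {0, 0, 1, 1, 0, 0};   // a simple bright bar
         const int outWidth = 12;                        // scale 6 -> 12 pixels
         const double scale = double(src.size()) / outWidth;

         for (int j = 0; j < outWidth; ++j) {
             // Pixel level: one sample at the pixel centre, copied to R, G and B.
             double c = sampleSource(src, (j + 0.5) * scale);

             // Subpixel level: R, G, B each sampled where that subpixel really sits
             // (at 1/6, 3/6 and 5/6 of the output pixel, for an RGB stripe).
             double r = sampleSource(src, (j + 1.0 / 6.0) * scale);
             double g = sampleSource(src, (j + 3.0 / 6.0) * scale);
             double b = sampleSource(src, (j + 5.0 / 6.0) * scale);

             std::printf("pixel %2d  pixel-level RGB=(%.2f %.2f %.2f)  subpixel RGB=(%.2f %.2f %.2f)\n",
                         j, c, c, c, r, g, b);
         }
     }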

  16. I feel like this is a problem. A lot of people have this mindset that PCs are the only computers around, even though practically every electronic device we use today is, for all intents and purposes, a computer.
  17. It doesn't matter, because the qualifier to this is that Intel will make you a Haswell if you ask for one. It's not like Intel is manufacturing Haswells, putting them on store shelves, and hoping they sell. It's why I said in the first place that Intel was still making 80186s and 80386s well into the mid-2000s. They weren't selling them on store shelves, they were selling them to people who were making embedded systems. They had to actually ask Intel (or a supplier) for them.
  18. If you have the materials to do it, why not do a dry run and test it yourself? It's not like you need a computer system.
  19. It's not a matter of what process node they're made on, it's a matter of features. A future Pentium or Celeron with missing features, whatever those may be, isn't going to sit well with some people.
  20. Then I don't understand how your reasoning makes sense. I'm pretty certain Intel is just offering system builders and other people who use Haswell for some reason or another a last chance to order batches for replacements or whatnot, and then they'll just shelve it and you can no longer buy Haswell. If anything, that means their 22nm fabs can shift to something else. And it looks like two of their main fabs still produce 22nm parts (https://en.wikipedia.org/wiki/List_of_semiconductor_fabrication_plants)
  21. Many apps that are running are likely using more than a dozen threads. Not every thread is running at the same time. If you have an application that's compute bound, then sure, you should limit the number of worker threads to the number of hardware threads available in the system (a minimal sketch of that is below).
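     A minimal sketch of that sizing rule with standard C++ threads: size the worker pool from std::thread::hardware_concurrency() and split a compute-bound loop across it. The busy-work loop here is just a placeholder for real compute-bound work.

     #include <cstdio>
     #include <thread>
     #include <vector>

     int main() {
         unsigned workers = std::thread::hardware_concurrency();
         if (workers == 0) workers = 4;  // hardware_concurrency() may return 0; fall back

         const long long totalIterations = 400000000;
         std::vector<std::thread> pool;
         std::vector<double> partial(workers, 0.0);

         for (unsigned w = 0; w < workers; ++w) {
             pool.emplace_back([&, w] {
                 // Each worker grinds through its share of a purely CPU-bound loop.
                 for (long long i = w; i < totalIterations; i += workers)
                     partial[w] += 1.0 / (i + 1);
             });
         }
         for (auto& t : pool)
             t.join();

         double sum = 0.0;
         for (double p : partial) sum += p;
         std::printf("used %u workers, result = %f\n", workers, sum);
     }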
  22. If you're implying Intel is going to go back to Haswell for future Pentium or Celerons, I don't really think that's going to be the case.
  23. I don't think people will spend much time, if any, optimizing specifically for SMT. The problem SMT is trying to solve is that a thread didn't use all of the CPU's execution units, so the CPU finds another thread that could make use of the remaining ones. The problem with optimizing specifically for SMT is that the number of execution units is often different between microarchitectures. If you optimized for Skylake, performance won't be as good on, say, Haswell or Zen. If it just so happens your tasks can scale that well, then it's a happy coincidence.
  24. What kind of AIO do you require? Because Corsair made the H5 SF AIO specifically for small cases. If you want something more traditional like a 120mm rad, the smallest case I've seen with one installed is Silverstone's FTZ-01/RVZ-01/RVZ-03/ML-03 (it's basically the same case with different fascias)
  25. Intel's 80186 and 80386 were available well into the mid 2000s because they were used in embedded systems. I think the 386 lasted longer. But I think the sticking point is this: Intel will only make them if you want them.