Mira Yurizaki

Posts posted by Mira Yurizaki

  1. 5 minutes ago, vexxeddragon said:

    My motherboard says it can support speeds at 1333 MHz and I'm trying to buy sticks at 2400 MHz. Is there a way to unlock those speeds or am I stuck with 1333 MHz?

    You'd have to go into the BIOS/UEFI and adjust the settings so the RAM is configured to a faster speed. However, this doesn't guarantee the RAM will actually run at that speed; if the board says it's only compatible up to 1333 MHz, it likely won't like anything higher. The 2400 MHz sticks will still work either way, they'll just default to a speed the board supports.

  2. 1 hour ago, hishnash said:

    Of course AMD can leverage chiplets for this if they want to. AMD using the same manufacturer as Apple is also a benefit for Apple, since if Apple wants to they can use their buying power (much more powerful than AMD's) to get more time on the line.

    I don't think it works that way; Apple can't have someone else's chip manufactured without some negotiation with the owner of the IP. Otherwise other companies could just ask for Apple's A-series SoCs.

     

    As for leveraging chiplets: yes, AMD could build one, but their current APUs aren't designed as MCM processors. They'd have to create an entirely new design, which doesn't make much sense when they should be leveraging what they already have. Apple would have to fork over the money for that, and I bet they won't unless they also make AMD hand over the manufacturing rights for that SKU, because Apple.

     

    EDIT: I think I misread the point about getting more time on the line, but I still don't think that would benefit Apple. Apple has a massive volume of chips it needs made, and setting fab time aside for a market that isn't nearly as large as their mobile market isn't good for that. It may also cause internal conflicts.

     

    Unless Apple can get a second supplier up to speed to make all of the mobile hardware it needs, cutting into that capacity to make room for another product doesn't seem like a good idea.

  3. 25 minutes ago, hishnash said:

    I reckon if Apple asked, AMD would be able to deliver. Since Apple doesn't have that many different laptops, the volume of sales for each product line is much, much higher than any other brand. So if Apple were to go AMD for the MBP 16, that would be more mobile chips than all the other AMD laptops combined.

    But AMD doesn't actually make the chips; TSMC does. And Apple already contracts TSMC to make their mobile processors (I believe Samsung is a second supplier, but given Samsung is a bit behind, they're probably only used for lesser SoCs, if at all).

     

    Also the APUs are still monolithic:

    [attached image: ENoXP5nUYAAmpei.jpg]

     

    So AMD can't even leverage chiplets for this.

  4. 1 hour ago, MrPotatoFace said:

    So what is really the highest density storage available? If you had to carry petabytes of data in a backpack, what would be the best choice? All answers are appreciated!

    microSD cards. The only reason nobody talks about them for any serious long-term storage solution is that they're basically bottom-of-the-barrel flash chips, so you're not getting performance or reliability that's suitable for it.
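
    For a rough sense of scale (numbers approximate): a microSD card measures 15 mm x 11 mm x 1 mm, about 0.17 cm^3, and 1 TB cards exist. A petabyte is therefore on the order of a thousand cards, roughly 170 cm^3 and well under a kilogram of flash, which is why nothing else comes close in density.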

    Looking around, I'm only seeing cards that have either the USB-A ports on the bracket or an internal header, but not both. One thing to note is that USB 3.1 Gen 2 cards need four PCIe lanes (at PCIe 2.0 speeds, at minimum) to actually deliver their bandwidth; otherwise there's no real point (see the quick bandwidth math below).

     

    So looking at your motherboard, the only other slot that can provide four lanes is the second graphics slot, and using it will cut the lanes your graphics card gets in half. (The remaining x16-length slot is electrically only PCIe 2.0 x2.) The lane cut probably won't matter, since graphics cards aren't hampered much by dropping to 8 lanes.
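
    The quick bandwidth math: USB 3.1 Gen 2 is 10 Gbit/s, which after encoding overhead is roughly 1.2 GB/s. A PCIe 2.0 lane carries about 500 MB/s of usable data, so x4 gives ~2 GB/s of headroom, while x2 tops out around 1 GB/s and x1 at 500 MB/s, both short of what a single Gen 2 port can demand. PCIe 3.0 roughly doubles the per-lane figure, which is why some cards get away with a 3.0 x2 link.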

  6. 43 minutes ago, laptopsbetterthanbuilding said:

    1. does anyone use 2560 x 1440 ?

    I do on my desktop

     

    43 minutes ago, laptopsbetterthanbuilding said:

    2. if so what screen size is too small that it'd be hard to read font / text on sites ?

    Arguably that depends on whether you want to stay around 96 PPI at 100% scaling or not. My phone is 2560x1440, but the UI is scaled up appropriately, so it's perfectly readable.
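
    For a rough feel of where text starts getting small at 100% scaling, pixel density is just the diagonal resolution divided by the diagonal size. A quick sketch (the screen sizes are only examples):

    import math

    def ppi(width_px, height_px, diagonal_in):
        """Pixels per inch along the panel's diagonal."""
        return math.hypot(width_px, height_px) / diagonal_in

    for size in (23.8, 27, 32):
        print(f'{size}": {ppi(2560, 1440, size):.0f} PPI')

    # 23.8": 123 PPI (text gets noticeably small at 100% scaling)
    # 27":   109 PPI (the usual 1440p sweet spot)
    # 32":    92 PPI (close to the classic 96 PPI assumption)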

  7. 8 hours ago, PianoPlayer88Key said:

    Speaking of "committed" or whatever ... is that how I would tell what my total memory usage INCLUDING pagefile would be?  Or is there another way?  (I know there's some place in Windows that will show you the size that's been assigned to pagefile (and also where you configure it), but it doesn't say what's actually in use.

    "Committed" is how much virtual memory space is available and in use. Virtual memory space is physical memory + page file size (which could be 0)

  8. 24 minutes ago, PianoPlayer88Key said:

    I wonder how quickly that 2nd one would load if, instead of the program being in the boot sector of a floppy disk, it was actually flashed onto BIOS. :)  Might be an interesting experiment for someone who has a motherboard with dual BIOS's or socketed BIOS chips - put some tiny games / programs on them, then boot to them and see how fast it loads. :)  (I have one that has both those features - ASRock Z97 Extreme6, but don't really have time or resources (like no BIOS programmer device / whatever) to do it.)

    It'll boot fast, considering UEFI images are still only around 16-32 MB. It just won't be very useful, since there are few hardware interfaces that are simple enough to work with.

  9. Everyone appears to be sourcing this Tweet:

    I don't know about you, but a screenshot of what appears to be a random text file with code names doesn't seem indicative of anything other than just that. And I can't find anything that would suggest this person is a credible source on the matter.

     

    Also, poking at their other related Tweets, they seem to only point to GPU technologies. To me, if this came from somewhere in macOS's code base, it points more to a video driver file with extra stuff hanging around than to any indication that Apple is going to use an APU.

  10. 55 minutes ago, Rick09 said:

    I mean what job would that be?

    The job of specifically designing the look and feel of a product is industrial design, though you could probably sneak your way in through the human factors field.

     

    Note that these usually require a college education and/or years of experience to get anywhere near a position like the one Jonathan Ive had at Apple when he started.

    As an aside, HWiNFO recently moved to an "averaging" method, described at https://www.hwinfo.com/forum/threads/effective-clock-vs-instant-discrete-clock.5958/

     

    Quote

    It has become a common practice for several years to report instant (discrete) clock values for CPUs. This method is based on knowledge of the actual bus clock (BCLK) and sampling of core ratios at specific time points. The resulting clock is then a simple result of ratio * BCLK. Such an approach worked quite well in the past, but is no longer sufficient. Over the years CPUs have become very dynamic components that can change their operating parameters hundreds of times per second depending on several factors including workload amount, temperature limits, thermal/VR current and power limits, turbo ratios, dynamic TDPs, etc. While this method still represents actual clock values and ratios reported match defined P-States, it has become insufficient to provide a good overview of CPU dynamics especially when parameters are fluctuating with a much higher frequency than any software is able to capture. Another disadvantage is that cores in modern CPUs that have no workload are being suspended (lower C-States). In such case when software attempts to poll their status, it will wake them up briefly and thus the clock obtained doesn't respect the sleeping state.

    Hence a new approach needs to be used called the Effective clock. This method relies on hardware's capability to sample the actual clock state (all its levels) across a certain interval, including sleeping (halted) states. The software then queries the counter over a specific polling period, which provides the average value of all clock states that occurred in the given interval.

     

    They also mentioned that AMD has its own proprietary way of measuring clock speed in Ryzen Master, and I'm certain Intel has one as well. All that tools like Task Manager, HWiNFO, and others can do is poll the counters the CPU exposes, either average them over time or report the value at that point in time, and hope for the best.
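
    For what it's worth, the hardware counters being described map to the APERF/MPERF MSRs on x86. APERF only ticks while a core is awake, so dividing its delta by wall-clock time averages the halted periods in as zero, which is the same spirit as HWiNFO's effective clock (I'm not claiming this is exactly their implementation). A minimal sketch on Linux, assuming root and the msr module loaded:

    import struct, time

    APERF = 0xE8  # architectural MSR: counts actual core cycles only while the core is awake

    def read_msr(cpu, reg):
        with open(f"/dev/cpu/{cpu}/msr", "rb") as f:   # needs root and 'modprobe msr'
            f.seek(reg)
            return struct.unpack("<Q", f.read(8))[0]

    t0, a0 = time.monotonic(), read_msr(0, APERF)
    time.sleep(1)
    t1, a1 = time.monotonic(), read_msr(0, APERF)

    print(f"Effective clock, core 0: {(a1 - a0) / (t1 - t0) / 1e9:.2f} GHz")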

    There are several ways to measure clock speed, the clock is constantly changing, only one number gets reported even though multiple cores can be clocked independently, and Task Manager polls at a rate of 1 Hz. All of that raises questions about how accurate this can really be.

     

    I don't think Task Manager is broken per se. Either the system isn't feeding it reliable data, or it's sourcing from something that's only reliable enough under default conditions.

     

    I mean, for example, my CPU is set to a flat 100 MHz x 40. No utility reports it as operating at 4.0 GHz 100% of the time.

  13. On 1/28/2020 at 10:10 AM, SPARTAN VI said:

    There's a California Consumer Privacy Act that grants consumers the access to, deletion of, and sharing of personal information that is collected by businesses. Of course, due to Luke's non-resident (or citizen) status, I don't know how this would work out for him. All businesses that operate within California more or less must comply however. Last I checked, Blizzard is headquartered in Irvine, California not to mention the millions of customers they have here. 

     

    Whether or not they even need to share the reason for the ban as a part of this act is another matter entirely. I highly doubt this would be the avenue to get that information. 

    I'm going to wager that "reason for banning" isn't considered personal data under the CCPA. Besides, any data covered by the CCPA must come with the option to be removed, and if you could make a company delete the "reason for banning" information, that opens up a can of worms.

  14. 9 minutes ago, Jae Tee said:

    So if I have this right: assuming I have enough, sufficient, or more likely lots of extra RAM, for slightly better performance, I don't want to use swap.

    You don't want a condition where the computer is constantly swapping. However, having storage allocated to virtual memory space lets you get away with less physical RAM than your total virtual memory footprint would otherwise require. There's a balancing act between having too little RAM and having just enough that you're not in a constant-swapping condition.

     

    Otherwise, if you want to run without any storage space allocated, you're going to need a lot more physical RAM than you'd actually use, because you can run out of virtual memory space even when not all of it is actively being used.

     

    9 minutes ago, Jae Tee said:

    Does this make sense? And if so, how can i have control over this (preferably windows, but also linux)? 

    In Windows it's in Control Panel -> System and Security -> System -> "Advanced system settings" in the left pane -> "Advanced" tab -> "Settings" button under Performance -> "Advanced" tab -> Virtual Memory.

     

    I don't know what it is in Linux.

     

    However, I'd advise leaving these settings alone. If you're not having a problem, there's nothing to fix.

  15. 2 hours ago, Rezoic said:

    Can anyone tell me the difference between a £400 1ms gaming monitor vs £100 1ms gaming monitor in terms of speed of monitor etc.

    Cause I see 1ms monitors that are expensive and some that aren't.

    Response time is a practically useless metric. The real differences between the two are likely the panel they use, the refresh rate they support, how many options the on-screen display menu gives you to play with, extra features like FreeSync, and how adjustable the stand is.

  16. 11 hours ago, Jae Tee said:

    Anyone know a good place where i can learn about swap and its relation to ram?

     

    Links please if allowed, preferably YouTube.

    I'll see if I can explain this in a clear manner.

     

    Way back when, RAM was expensive but people wanted to run more and more apps. To help alleviate the issue of not having enough RAM, OSes started moving data out of RAM onto the storage drive if it wasn't being used. The act of moving data back and forth between RAM and storage is called "swapping." This is useful in some cases, but if there's constant swapping going on, the system will slow to a crawl, since transferring data to and from storage is exceptionally slow compared to RAM.

     

    To support this feature, the concept of "virtual memory space" was created, wherein the OS can make applications believe there's more memory than what's physically available. This also simplifies app development, because apps don't have to know how much physical memory (RAM) is available; the OS knows, and all the application has to do is make requests from "virtual memory." The OS handles translating these requests from virtual memory space to physical memory space. In Windows (Linux and others may be similar), virtual memory space is basically how much RAM you have plus how much storage was allocated to swapping (this shows up in C:\ as a file named pagefile.sys).

     

    Of note, you can get away with running your computer without storage space allocated to swapping, but this presents a few problems:

    • Some applications actually refuse to run if no storage space was allocated
    • When applications ask for a chunk of memory, the OS will often give the application more than what was requested. The idea is that the application will likely want more memory in the future, so this keeps the application's data from being scattered all over memory space. However, until the application actually uses it, the OS won't mark the memory as "in-use", only reserved (there's a small sketch of this further down).

      The problem with not having storage space allocated is that applications can make enough reservations to take up all of the physical RAM. The next time an application wants more memory, the OS goes "sorry, can't do that" and the application likely throws an "out of memory" error. When storage space is allocated, the OS can back those reserved portions with the page file without much of a hiccup, since there's no data in them to begin with.

    In Windows' Task Manager -> "Performance" tab -> Memory page, the "Committed" value tells you how big your system's virtual memory space is and how much of it all your applications combined have at least reserved.
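
    To make the reserved-versus-in-use distinction concrete, here's a minimal sketch using the Win32 VirtualAlloc call from Python's ctypes. Watch Task Manager's Memory page while it runs: reserving barely moves anything, committing bumps "Committed", and only actually writing to the pages bumps "In use". (Illustrative only; exact numbers vary by system.)

    import ctypes

    MEM_RESERVE, MEM_COMMIT, PAGE_READWRITE = 0x2000, 0x1000, 0x04
    SIZE = 1 << 30  # 1 GiB

    VirtualAlloc = ctypes.windll.kernel32.VirtualAlloc
    VirtualAlloc.restype = ctypes.c_void_p
    VirtualAlloc.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_ulong, ctypes.c_ulong]

    # 1) Reserve: claims address space only; neither "Committed" nor "In use" really changes.
    base = VirtualAlloc(None, SIZE, MEM_RESERVE, PAGE_READWRITE)
    input("Reserved 1 GiB, press Enter to commit it...")

    # 2) Commit: the OS now promises backing (RAM or page file); "Committed" jumps by ~1 GiB.
    VirtualAlloc(base, SIZE, MEM_COMMIT, PAGE_READWRITE)
    input("Committed 1 GiB, press Enter to write to it...")

    # 3) Touch the pages: only now does physical RAM ("In use") actually climb.
    ctypes.memset(base, 0xAB, SIZE)
    input("Wrote 1 GiB, press Enter to exit...")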

     

    EDIT: tl;dr OSes allocate space in storage to dump stuff from RAM into if the data isn't being used. Swapping is transferring data in and out.

  17. 3 hours ago, PyroTheWise said:

    What is your experience with building systems and auto installs from manufacturers?

    My rule for the past 15 or so years has been to install only the drivers. If the additional software that comes with the drivers has something you'll actually use or, god forbid, is required to make the thing work, then install that as well. For example, I never install GeForce Experience with my NVIDIA drivers, because it provides zero functionality I care about. But I have a Sound Blaster card whose software is needed to configure it, so I have that installed. I also always skip motherboard utilities, because all of that can be configured in the UEFI setup anyway.

     

    But as far as "auto installing" goes, I almost never see that. Then again, I'm careful about going through the install process to make sure I'm getting exactly what I want, rather than clicking "Next" mindlessly.

  18. 3 hours ago, Brennan Price said:

    I feel like there is something that a lot of people are wildly missing off for some reason... and that would be the specs to run something smoothly... which of course is down to the OS desired specifications. Sure, a quad core Intel Pentium and 2GB of RAM may struggle to run Windows 10 effectively but it would still work... whilst if Lubuntu or Slax Linux (for example) was installed on that same machine then it would run much better, not to mention that if the Windows machine had more RAM or a better CPU then it would also run better as well but... I'm coming onto my point now...

    I would argue a lot of what makes Windows feel bloated is that a lot of extra things are turned on by default. Windows feels a lot snappier when you remove all of the UI glitz and tweak it a bit. Literally the only difference between Lubuntu and Ubuntu is that Lubuntu uses a lighter-weight desktop environment and a different set of userland applications (presumably lighter-weight versions). Otherwise it's the same OS underneath, yet Ubuntu recommends much higher system requirements.
