Everything posted by Mira Yurizaki

  1. Trying to break records and become the most pow…

    The older versions are going to depend highly on the CPU's performance anyhow because of dat bottleneck.
  2. The appeal to me of non-competitive games:

     - I don't have to devote a lot of time trying to "git gud"; or rather, I can enjoy the game at my pace, not at the pace of the community.
     - People on the internet get super salty if things don't go their way.
     - Depending on how well developed the game is or what its progression system is like, I'm going to be at a disadvantage the longer I don't play or the later I start from scratch.
     - Some games don't have matchmaking, and even then matchmaking can be flaky.

     This is why FFXIV was the first MMO to really appeal to me. There's plenty to do by yourself if you want (and even then, you can run a lot of that content with a group), and the endgame isn't PvP focused. And before you go "lol, who wants to play an MMO solo?"
  3. If it is converting DX9 into Vulkan, welp.

     -----

     So I don't know if it's really a case of "Windows vs. Linux" or "DirectX vs. Vulkan" so much as it is "old vs. new".
  4. Outside of taking almost three slots, it looks like a standard size for a high-end card. This is unlike, say, EVGA's higher-end SKUs, which have 5"-tall cards (the XC Ultra looks like a 4"-tall card).
  5. That's actually a good thing. If you get a tax refund, you gave the government an interest-free loan.
  6. You can implement a standard in a terrible way. For example, the Pentium 4 is a terrible implementation of x86.
  7. When I'm bored, I think about why communication standards measure speeds in bits per second rather than bytes per second:

     

    1. TVwazhere

      So it's NOT just because a bigger number is better? Interesting.

  8. When you look at the bandwidth of a communication bus or interface such as USB or SATA, or the speed your ISP advertises, you notice that they often measure everything in bits per second instead of bytes per second, the figure we're more used to. The common assumption is that companies advertise bits per second because it's a larger number, and obviously larger means better to the average consumer. Confusingly, the shorthand versions (b for bits, B for bytes) also look similar enough. But there's a more likely reason why you see bits per second: at the physical layer of communication, data isn't always 8 bits.

     Let's take, for instance, every embedded system's favorite communication interface: the humble UART (universal asynchronous receiver/transmitter). The physical interface itself is super simple; at minimum all you need is two wires (data and ground), though a system may have three (transmit, receive, ground). However, there are three issues:

     - How do you know when a data frame (a byte in this case) has started? What if you were sending a binary 0000 0000? If you were using 0V as binary 0, the line would look flat the entire time, so how would you know whether you're actually getting data or not?
     - How do you know when to stop receiving data? A UART can be set up to accept a certain number of data bits per "character," so it needs to know when to stop receiving.
     - Do you want some sort of error detection mechanism?

     To resolve these:

     - A bit is used to signal the start of a transmission by being the opposite of what the UART "rests" at. So if the UART rests at a value of 0, the start bit will be whatever the value of 1 is.
     - A bit (or more) is used to signal the end of a transmission. This is often the opposite value of the start bit in order to guarantee at least one voltage transition takes place.
     - A bit can be used for parity, which is 0 or 1 depending on whether the number of data bits set to 1 is even or odd. Note that error detection mechanisms are optional.

     A common UART setting is 8-N-1, or 8 data bits, no parity, 1 stop bit. This means at minimum there are 10 bits per 8 data bits (the start bit is implied). This can be as high as 13 bits per 9 data bits, such as in 9-Y-2 (9 data bits, using parity, 2 stop bits). So if a UART in an 8-N-1 configuration is transmitting at a rate of 1,000 bits per second, the system is only capable of transferring 800 data bits per second, or an 80% efficiency rating.

     Note: Technically it's not proper to express the transmission rate of a UART in "bits per second" but in "baud", which is how many times per second the UART can shift its voltage level. In some cases you may want to use more than one voltage-level shift to encode a bit, such as when embedding a clock signal; this is done in some encodings like Manchester code. But often, baud = bits per second.

     Another example is PCIe (before 3.0) and SATA. These use an encoding method known as 8b/10b encoding, in which 8 bits of data are encoded as a 10-bit sequence. The main reason for doing this is to achieve something called DC balance: over time, the average voltage of the signal is 0V. This is important because communication lines often have a capacitor acting as a filter. If the average voltage stays above 0V over time, it can charge this capacitor to the point where the line reaches a voltage that causes issues, such as a 0 bit looking like a 1 bit. In any case, like the UART setting 8-N-1, 8b/10b encoding is 80% efficient.

     This is all a long explanation to say that the reason communication lines are expressed in bits per second rather than bytes per second is that bits per second is almost always technically correct, whereas bytes per second is not.
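     If you want to sanity-check the framing overhead, here's a minimal Python sketch of the arithmetic above (the function and parameter names are just mine, for illustration):

     def uart_efficiency(data_bits=8, parity_bits=0, stop_bits=1, start_bits=1):
         """Fraction of transmitted bits that are actual data in a UART frame."""
         frame_bits = start_bits + data_bits + parity_bits + stop_bits
         return data_bits / frame_bits

     def line_code_efficiency(payload_bits, encoded_bits):
         """Efficiency of a line code such as 8b/10b (8 payload bits per 10-bit symbol)."""
         return payload_bits / encoded_bits

     print(uart_efficiency(8, 0, 1))     # 8-N-1: 10 bits per 8 data bits -> 0.8, so 800 data bits/s at 1,000 bits/s
     print(uart_efficiency(9, 1, 2))     # 9-Y-2: 13 bits per 9 data bits -> ~0.69
     print(line_code_efficiency(8, 10))  # 8b/10b (PCIe 1.x/2.0, SATA)    -> 0.8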
  9. NVMe is faster, then, if that's what you're after. Or you can use Thunderbolt if you want to keep it external.
  10. NVMe is a communication protocol currently used by SSDs. If you're saying the SSD is on some other interface (SATA? USB?), then maybe booting will be faster, but by a second or less. The process of booting is mostly spent initializing all of the hardware, drivers, and applications rather than loading stuff in from storage.
  11. Short answer: NVMe is faster than USB 3.1 Gen 2.

      Long answer: Were the B's capitalized or not? A single PCIe 3.0 lane has a bandwidth of about 985 megabytes per second (MB/s). Four lanes would be 3,940 megabytes per second, which after some conversion puts it at about 3.94 gigabytes per second (GB/s). Multiplying 3,940 megabytes by 8 to get megabits gives us 31,520, which you can fudge with some rounding and call 32 gigabits per second. USB 3.1 Gen 2 is rated at 10 gigabits per second, which works out to about 1.212 gigabytes per second (the conversion accounts for an encoding method that uses up some bits, so not every bit is useful data).

      In any case, a capital B is bytes, a lowercase b is bits.
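      Here's a quick Python sketch of those conversions. The encoding overheads (128b/130b for PCIe 3.0, 128b/132b for USB 3.1 Gen 2) are my assumption of what the conversion is accounting for:

      # PCIe 3.0: 8 gigatransfers/s per lane, 128b/130b encoding
      pcie3_lane_bytes_per_s = 8e9 * (128 / 130) / 8
      print(f"PCIe 3.0 x1: {pcie3_lane_bytes_per_s / 1e6:.0f} MB/s")      # ~985 MB/s
      print(f"PCIe 3.0 x4: {4 * pcie3_lane_bytes_per_s / 1e9:.2f} GB/s")  # ~3.94 GB/s
      print(f"PCIe 3.0 x4 raw: {4 * 8} Gb/s")                             # 32 Gb/s

      # USB 3.1 Gen 2: 10 Gb/s raw, 128b/132b encoding
      usb31g2_bytes_per_s = 10e9 * (128 / 132) / 8
      print(f"USB 3.1 Gen 2: {usb31g2_bytes_per_s / 1e9:.3f} GB/s")       # ~1.212 GB/s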
  12. I was more alluding to what GPUs were designed to do: crunch a bunch of data at once using the same instructions, at the cost of higher latency. So doing a small number of operations on a GPU, especially operations that aren't all running the same instructions, is an incredibly inefficient way of doing things. It's the same reason you don't spin up a mass production facility to build a one-off prototype.
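      If you have a CUDA-capable GPU and CuPy installed, here's a rough sketch of that amortization effect (array sizes and iteration counts are arbitrary, just for illustration):

      import time
      import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

      def gpu_time(fn, iters=200):
          """Average wall-clock time of fn(), synchronizing so GPU work is counted."""
          cp.cuda.Stream.null.synchronize()
          start = time.perf_counter()
          for _ in range(iters):
              fn()
          cp.cuda.Stream.null.synchronize()
          return (time.perf_counter() - start) / iters

      tiny_a = cp.asarray([1.0], dtype=cp.float32)   # a single 1.0 + 1.1
      tiny_b = cp.asarray([1.1], dtype=cp.float32)
      big_a = cp.ones(10_000_000, dtype=cp.float32)  # 10 million adds at once
      big_b = cp.ones(10_000_000, dtype=cp.float32)

      t_tiny = gpu_time(lambda: tiny_a + tiny_b)
      t_big = gpu_time(lambda: big_a + big_b)
      print(f"1 add:           {t_tiny * 1e6:.1f} us per call")
      print(f"10,000,000 adds: {t_big * 1e6:.1f} us per call, "
            f"{t_big / 10_000_000 * 1e9:.3f} ns per element")

      The single add pays the full kernel-launch latency for one result; the big batch pays roughly the same fixed cost spread over 10 million results.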
  13. The device you linked is for turning a dumb speaker into a Bluetooth speaker. It's not for transmitting audio out to a Bluetooth headset or speaker.
  14. The QVL is basically "we tested with this RAM and found the optimal configuration settings for it." It may mean you have to do some tweaking to get the most out of any RAM that isn't on the QVL.
  15. More random thoughts of the day! Well, more like one.

     

    RDNA 2 GPU goodness

    Considering Microsoft confirmed the Xbox Series X uses RDNA 2, and it's all but confirmed to be in the PS5 as well, and with both companies touting ray tracing capabilities, it looks like AMD will hop on the ray tracing bandwagon this year. It also looks like RDNA 2 will support other DirectX 12 features like Variable Rate Shading.

     

    So it looks like Navi was AMD's Maxwell 1. Hopefully this means games in the future will start taking advantage of these new features.

  16. They can tarnish traces or any other exposed metal, but that's about it. You can also plug the PSU in and touch the metal housing once in a while.
  17. I think this harkens back to the point that they seemed to design Bulldozer primarily for server workloads and hoped what they did there would trickle down to the rest of the market stack. Even then, Intel was still winning, if not tying, in standard tests. Maybe in games where DirectCompute was a thing, but most non-3D applications use the CPU for FP operations because there aren't enough FP operations to justify running them on the GPU. GPUs are good for crunching a metric ton of data at once. They're not good for crunching a single 1.0 + 1.1 you entered in Calc.
  18. From what I gathered after people tested Bulldozer in applications today, the major reason it failed was that nobody was really using software that could take advantage of it. Yes, 8 cores sounded amazing on paper, but this was at a time when most people had around 2 cores, with 4 cores in second place. However, the shortcomings were mostly the following:

      - A deeper pipeline, which impacts branch mispredictions. Bulldozer had a 20-cycle penalty whereas Sandy Bridge had 14-17.
      - Cores that shared the same instruction decoding front-end. Bulldozer had a peak decode rate of 4 instructions per module (which has two cores), whereas Sandy Bridge had 4 per core.
      - The individual integer cores were weaker than the previous generation's.
      - Memory latency was worse than either Phenom II or Sandy Bridge overall.

      I believe Bulldozer was designed with servers in mind first. Servers tend to run applications that are designed to scale across cores and typically aren't exactly computation heavy. Now that more applications are taking advantage of higher core counts, Bulldozer is sort of seeing a resurgence.
  19. Since you're using an NVIDIA GPU, enabling Fast Sync gets you no tearing but you'll also get the latest frame that was rendered before the next display refresh cycle. Plus I think all Turing GPUs support integer scaling. So it might look better at 1080p on a 4K monitor with that enabled.
  20. Exclusive fullscreen, at least according to Unity's recommendations, is useful if you're running low on video resources or you don't want the GPU rendering the desktop. If you're running a multi-monitor setup, the latter reason becomes sort of a moot point. Just keep your games in borderless fullscreen if you want to switch between apps more easily. As an added bonus, Windows forces triple buffering and vsync on all desktop windows, so you'll get no tearing and minimal input lag.
  21. I don't see what switching apps has to do with demanding a standard hotkey for switching fullscreen modes. Is there some technical reason that makes switching fullscreen modes better?
  22. Because nobody wants a hotkey they can accidentally press that causes a non-trivial amount of disruption, especially for something that is typically set-once-and-done anyway. Also, Alt+Enter was the hotkey for a while.
  23. Hardly anyone makes barebones solutions, and with those that do, you'll end up paying more for the laptop than buying it outright from a system builder. What requirements are you looking for? EDIT: At best you could probably find a used Lenovo laptop that still has a socketed CPU. Then you can configure it however you want from there. But again, it's not exactly a cheap solution.