Everything posted by Sebastian

  1. Regarding the video "I have some things to say - Core i9 & X299": I was wondering about all this after seeing the Kaby Lake X specs, and the video did a great job of filling in the details that I hadn't yet bothered to research. I agree with Linus here that the X299 launch (plus LGA2066 CPUs) appears pretty rushed and haphazardly thrown together, driven by AMD's Threadripper announcements. That being said, I think there are a couple of silver linings here. First, for the extreme enthusiasts, I think this launch will be like any other. You'll still be able to buy a badass 18C/36T CPU and an equally badass mobo with all the possible bells and whistles. It's only the lower-tier CPUs (particularly Kaby Lake X) that start to become questionable. The second possible good news is that Intel may already be in the process of taking Linus' free business advice. If we're lucky, then this particular launch will be a bit of a mess, but Coffee Lake will end up being a big step up. They've already hastily announced that it will give a 30% performance boost, though we'll have to wait and see what metric they're actually referring to. Intel may have been caught off guard this time around, but now that they've realized that AMD is finally putting out some strong competition, they may start pushing themselves harder. Ultimately, that's what everybody's been wanting for years anyway. We've had great improvements in GPU performance for years because there's been real competition in that market. Hopefully we'll finally see the same in the CPU market in the years to come (once we get past X299).
  2. I'm sure you're right, and I know LTT also uses similar sensors to measure case temps (e.g., in the Workshop episode where Luke tried filling the computer case with random junk to see how it would affect temps). I just mentioned it here because I've seen other tech sites showing IR photos of hardware where they've labeled specific temperatures at specific points and then said something like, "the IR camera says it's only 60 C on the backside of the PCB." That may be what the camera says, but that doesn't necessarily mean it's true. #alternativefacts Anyway, glad you enjoyed the post! I wasn't sure if ANYBODY would care haha.
  3. I thought this post might be worth making because I've seen a lot of hardware reviewers using thermal (i.e. IR) cameras to check external temperatures (most recently in the "Backplates cool your videocard" LTT video, but I've seen a lot of other websites do it as well). I want to make it clear that I'm not writing this post to attack LTT or anybody else, but instead I'm doing it to offer a bit of knowledge to the handful of people out there who might actually be interested. So rather than doing real work, I'm going to give a mini IR camera physics lesson on an internet forum instead. IR cameras don't have a way of directly measuring the temperature of objects in the way that a thermometer does. Instead, they measure the amount of thermal radiation coming from a surface (IR wavelengths in this case), and then use that intensity to calculate a temperature using what's known as the Stefan-Boltzmann Law. The equation looks like this: T = (I / (e·A·σ))^(1/4), where T is the temperature, I is the intensity of the IR radiation entering the camera, e is the emissivity of the surface you're pointing the camera at, A is the area of that surface, and σ is the Stefan-Boltzmann constant, which is just a fixed number we can ignore here. The important thing to note is that there are other variables which go into this calculation, namely e and A. The most important one, and the one I'm going to focus on, is the emissivity e. Emissivity is a property which varies from one material to another, and is essentially a measure of how well a given material behaves like an ideal black body (i.e., how good the material is at radiating IR). It can range from 0 to 1, where 1 is the equivalent of an ideal black body, and 0 means that the object doesn't radiate any IR at all. Most thermal cameras (including the FLIR ones that I think most tech reviewers use) assume that the emissivity is 0.9 or so. This means that if the object you're pointing the camera at actually has an emissivity of 0.9, then the temperature that the camera shows on-screen will be accurate. However, there are plenty of materials which do NOT have an emissivity of around 0.9. Metals are the most relevant example: they tend to have very low emissivities (often less than 0.1). So what happens if we try to measure the temperature of, say, a copper surface with our camera? The emissivity of copper is around 0.05 (it varies a bit depending on the smoothness of the surface), but our camera is assuming that the emissivity is 0.9, which is WAY too high. This means that the temperature the camera calculates will be considerably lower than the true temperature (see the equation above). Metals have another problem: they are also good at reflecting IR wavelengths. This means that when you point your camera at that piece of copper, some of the IR radiation it's measuring is actually coming from somewhere else in the room and simply reflecting off of the copper surface and into the camera. These two phenomena combine to make it look like the piece of copper is colder than its surroundings, when in reality everything is the same temperature. This is exactly what happened in the IR images shown in LTT's "Backplates cool your videocard" video. In the images it looks like the copper region is much colder than the surrounding backplate, when in reality it was probably just as hot or hotter. So there are a few conclusions we can draw here. First, exact temperatures measured with a thermal camera should be taken with a grain of salt.
If the emissivity the camera assumes is different from the emissivity of the object you're measuring, then the temperature given by the camera will be incorrect. Second, we CAN compare relative temperatures between different regions, if we do it carefully. This can be done by putting a piece of non-shiny tape on each of the surfaces you're trying to compare. The tape will equalize to the temperature of whatever it's attached to, and because you have the same tape on both surfaces (e.g., a piece of copper and a piece of plastic), the emissivities will also be the same (because now it's the tape radiating in both cases) and the temperatures of the two objects will be directly comparable. You can even take this a step further if you want to get accurate temperatures. If your camera allows you to manually set the emissivity of what you're looking at (many FLIR cameras do), you can determine the emissivity of the tape you're using by putting some tape on a surface, measuring its temperature with a normal thermometer, and then adjusting the emissivity in the camera's settings until the on-screen temperature matches the true temperature. Once you know the emissivity of your tape, you can use that value in your camera's settings to accurately measure the temperature of anything you put the tape on in the future.
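To put some rough numbers on the emissivity issue, here's a minimal Python sketch of the correction implied by the equation above. The function name and the example values (a camera assuming e = 0.9, matte tape at roughly e = 0.95, bare copper at roughly e = 0.05) are just my own illustration, and the model deliberately ignores reflected background IR, which in practice makes readings off bare metal even less meaningful:

def corrected_temp_k(reported_k, assumed_e, actual_e):
    # The camera converts measured intensity into temperature using assumed_e,
    # so inverting T = (I / (e*A*sigma))^(1/4) gives
    # T_true = T_reported * (assumed_e / actual_e) ** 0.25, with T in kelvin.
    return reported_k * (assumed_e / actual_e) ** 0.25

C0 = 273.15  # offset between Celsius and kelvin

# Mild mismatch: matte tape (emissivity ~0.95) read by a camera assuming 0.9.
tape_true_c = corrected_temp_k(40.0 + C0, assumed_e=0.9, actual_e=0.95) - C0
print(f"Tape reported at 40 C is really about {tape_true_c:.1f} C")

# Severe mismatch: bare copper (emissivity ~0.05). The correction factor alone
# is roughly 2x in absolute (kelvin) temperature, before even accounting for
# the reflected room IR that dominates what the camera actually sees.
factor = (0.9 / 0.05) ** 0.25
print(f"Correction factor for bare copper: ~{factor:.2f}x")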
  4. Well now I've finally had the time to do a bit of testing, and I can say that I saw a negligible difference in framerates for both Overwatch and Battlefield 1. I'm using a Core i5 4670K @ 4.2 GHz (sad, I know haha), and a GTX 1080 Founders Edition, with 8 GB of RAM. I tested both games at max settings at 1440p. Luckily, however, there WERE two good things to come out of this. First of all, since I tested the games on HDD first, and SSD second, both games are now gonna stay on the SSD, since I DID obviously notice a big improvement in loading times. Second, during my testing I noticed that ticking the "epic" preset in Overwatch caused the game to automatically set my resolution scale to 122%, which corresponds to nearly 50% more pixels (the scale applies to each axis, so 1.22² ≈ 1.49). When I manually changed this to 100%, my average fps went from the 90-100 fps range up to the 135-145 fps range. Hooray! I recommend for all you Overwatch players out there that you take a quick look in your graphical settings to see if the game automatically applied a >100% resolution scale.
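As a quick sanity check on the render scale math (assuming the scale is applied to each axis, which is how it appears to work in-game; the resolutions below are just my own worked example):

# Pixel count grows with the square of the render scale.
base_w, base_h = 2560, 1440  # 1440p, as in the test above

for scale in (1.00, 1.22, 1.41):
    w, h = round(base_w * scale), round(base_h * scale)
    extra = scale * scale - 1  # fraction of extra pixels vs native
    print(f"{scale:.0%} render scale -> {w}x{h} ({extra:+.0%} pixels vs native)")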
  5. If the graphics card doesn't have enough VRAM, wouldn't the system memory come next (before system storage)? This is why it seems so bizarre to me that storage should have any impact on FPS whatsoever. At any rate, I currently have BF1 on an HDD, so if nothing else comes out of this testing, at least I'll definitely have faster loading times!
  6. That's my initial reaction as well. Which is why I want to test it for myself, just in case I can get a bunch of free FPS :P.
  7. Hey everyone, This article recently popped up on my Facebook feed: http://www.tweaktown.com/articles/7911/upgrade-test-gtx-760-vs-1060-ssd-hdd-system/index.html In it, the author benchmarks a few games using either a GTX 760 or 1060 combined with either an HDD or an SSD (i.e., 4 combinations in total). According to his results at 1080p, 3 out of 4 games (Deus Ex: Mankind Divided, Overwatch, and BF1) all had 20-30% framerate increases ONLY by switching from an HDD to an SSD. I have to say that this was pretty unexpected for me, since I assumed that, at least for online shooters like Overwatch and BF1, everything would be loaded into system memory and VRAM at the start of the match, thus making your type of storage irrelevant (except when it comes to loading times). I also feel the need to point out that they claim to be using an i5 760 (LGA 1156) on an H97 (LGA 1150) motherboard, and it's also not the first time I've seen some unusual things on TweakTown. That being said, if moving some games from an HDD to an SSD can really improve performance so dramatically, I figured people here might want to know about it. I guess the only way to know for sure would be to test it myself, so that's probably what I'll do. Out of the four games tested in the article, I only have Overwatch and BF1, so I suppose I'll go test those out, then come back and post here again later, assuming anyone is interested.
  8. Well my original goal was maximizing performance per dollar, but I double-checked prices and found an MSI GTX 980 Ti Gaming for less than double the price of the cheapest GTX 970 I'd found previously. The extra 2 GB of memory will definitely be useful for us as well. Perk of having special purchasing arrangements between the university and retailers, I guess. Time to go convince my boss that four 980 Tis are worth it! The software is called Mumax3. It's designed for simulating magnetism in micro- and nano-structured materials, and we've already been using it for a while. We've got a computer with a 780 and a 970, plus another computer with two Titan Blacks, but with the number of people using the software, we still have people pretty much constantly waiting in line. So basically this build is more because we know it works well and want to add more GPUs to the cluster to take some load off the others that we already have.
  9. I'm building a computer for work to run simulations on, and since the software we use is designed to use CUDA, we want to cram in as many GPUs as possible. As the resident computer nerd, I volunteered to design and build it. Since we're more interested in bang-for-buck, I'm opting for four GTX 970s rather than the highest-end cards (or Teslas, for that matter). When it comes to choosing the case, however, I really want to be sure that it will actually have space for four GPUs. I previously built a 3-GPU system for the same purpose, and discovered too late that it had one expansion slot too few, forcing me to do some creative modding. I'd rather not repeat that. I'm looking at Corsair's 750D Airflow Edition. Corsair doesn't advertise it as being capable of holding 4 GPUs, but it DOES have 9 expansion slots, which should be enough as far as I can tell. I haven't managed to find any blogs or forums online where somebody has used this specific case with 4 GPUs, but I found one where they used the Graphite 760T, which appears to be very similar. What do you guys think? Also, what are your opinions on reference 970s vs something like an MSI Gaming or Gigabyte G1 Gaming?
  10. Alright, well that seems like a more reasonable criticism then. I'm hoping that I disagree when I get around to playing it, though I haven't gotten a chance yet. I dunno, I haven't gotten into any RPGs in a while, so I'm hoping that I'll like this one.
  11. I don't think that people who are interested in this game are expecting it to be some sort of simulator-type game. It's an online RPG game, so of course you have bullet sponge-y enemies. Do people shy away from WoW because a guy surviving five fireballs to the face is unrealistic and "kills the immersion"? No, of course not! The thing that makes these games enticing (for some people, not everybody) is the progression. Getting loot, making yourself more powerful, fortifying your base, etc. If you want immersion go play a military sim.
  12. I don't really understand why so many people get their panties in a bunch over this stuff. I don't care if you don't like a particular game; it's not going to affect my enjoyment in the slightest. So many people seem to believe that a game's enjoyability is an objective value, when it is (by definition) subjective. You don't like Destiny or The Division or any other game? Fine, don't play them. But don't be an asshole and go around telling other people that they can't enjoy something that you don't, especially when the player base statistics are staring you in the face, telling you that there are, in fact, a huge number of people who do have fun playing these games. And the whole point of video games is to have fun, right? Or did I miss something?
  13. Obviously it's a clipped version of the present progressive tense of the verb "to elf." I think he missed an apostrophe though, so it should really be "elfin'".
  14. Does this mean we'll be able to play PS4 games with a mouse and keyboard? Seems like it could really screw up the balance in first person shooters...
  15. I think you make a really good point here. An item's value is far from static. Most people would argue that a small ham and cheese sandwich isn't worth $20... right up until they step off of a 10-hour plane flight, desperate for something to eat. Of course, this situation isn't exactly the same, but the point is that a product is worth as much as people are willing to pay for it, and people ARE willing to pay HTC's price for the Vive.
  16. Honestly I don't really see why people would have a problem with LMG over this stuff. Whether you trust Linus or not, there IS still useful information to be gained from the reviews. Even if he were secretly being paid to give a positive review on a product, it wouldn't change the numbers in the benchmarks. If you're looking at reviews and are seriously looking into buying a GPU, CPU, etc., you should: (a) Look at the benchmark data that's being presented. This is where the product in question as well as competing products are being most objectively compared. (b) Look through the rest of the review article/video for other objective information (e.g., GPU X has a certain additional feature, but GPU Y doesn't). (c) Look at multiple reviews (both professional and consumer reviews), in order to check that the OBJECTIVE information (i.e., the stuff from (a) and (b) above) roughly matches up between different sources. (d) Take any subjective information (i.e., conclusions and opinions of the reviewer) with a grain of salt. The reviewer is just a normal person like you or me, and they're entitled to their opinion. Just because it's Linus or Jay or someone at Anandtech or Tom's Hardware doesn't mean that they should be put on any sort of pedestal. You should never trust somebody's opinion just because they are in a position of power or because they seem really cool. You should take the objective information and form YOUR OWN opinion, regardless of whether you agree with the conclusions of any particular reviewer. What I'm trying to emphasize is that the important thing in reviews (especially from a potential buyer's perspective) is the objective information that is presented. And that's the real value of the LMG review videos: they are able to provide extensive and objective apples-to-apples comparisons of a lot of products. Only big reviewers have the resources to compare thousands of dollars' worth of hardware in a series of benchmarks. And part of the reason they have those resources is because companies are willing to send their products to the reviewers. So to conclude, I honestly wouldn't really care if Linus or anyone else WERE being paid to give glowing reviews of products, because the numbers in the benchmarks don't lie. And anyway, I don't think anything like that is going on, because as Linus has already pointed out in this thread, behaving that way would be an incredibly stupid business decision.
  17. I like the look of the Zotac card as well. That being said, for your purposes I think that the Gigabyte card will be better. Like you said, it's probably a bit better suited for overclocking, and putting an EK block on will really reduce its size, which is an important thing to consider for a mini ITX build...
  18. I think all he meant was that PCIe 3.0 x4 has the equivalent bandwidth of PCIe 2.0 x8. The point was that the bandwidth of PCIe 3.0 x4 would not limit the performance of a modern GPU. The fact that it wouldn't work for SLI is a separate point.
  19. I think you're getting hung up on the x4 vs. x8 question. To summarize what other people in this thread have already said: For a single card, performance will be identical whether it's running at PCIe 3.0 x16, x8, or x4, so don't worry about that. For SLI to be enabled, each card needs to run at either x8 or x16. If one or more cards is at x4, you won't be able to turn on SLI. Z97 only has 16 PCIe 3.0 lanes in total (from the CPU), which means that if you want to run dual cards in SLI, the only option is for them to both run at x8. This means that if you run SLI on Z97, you will not be able to put anything else on a PCIe 3.0 connection, period. The only possible way to use two or more cards in SLI AND a PCIe 3.0 SSD is to use X99.
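To put rough numbers on the bandwidth side of this (and on the PCIe 3.0 x4 vs. 2.0 x8 equivalence mentioned above), here's a quick back-of-the-envelope sketch using the approximate per-lane rates for each PCIe generation; the helper function is just my own illustration:

# Approximate usable bandwidth per lane, after encoding overhead (GB/s):
#   PCIe 2.0: 5 GT/s with 8b/10b encoding    -> ~0.5   GB/s per lane
#   PCIe 3.0: 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
PER_LANE_GBPS = {"2.0": 0.5, "3.0": 0.985}

def link_bandwidth_gbps(gen, lanes):
    # Total one-direction link bandwidth for a given generation and lane count.
    return PER_LANE_GBPS[gen] * lanes

for gen, lanes in [("2.0", 8), ("3.0", 4), ("3.0", 8), ("3.0", 16)]:
    print(f"PCIe {gen} x{lanes}: ~{link_bandwidth_gbps(gen, lanes):.1f} GB/s")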
  20. Like he said, if the code you're using is able to run on the GPU, then the CPU you're using will be irrelevant. If, however, you have no choice but to run your simulations on the CPU, then you will almost certainly see a very large increase in performance going from a 4790K to a 5960X, simply because you'll have twice as many cores (even if those cores are running at lower clock speeds). As far as gaming goes, it really doesn't matter which processor you pick, especially if the 4790K is the cheapest option you're looking at. The only way you'd be able to "bottleneck" a Titan X with a modern processor is to run a game with fairly low graphical demands and a lot of different objects moving around on-screen. But at that point, the "bottleneck" will just be reducing your stupidly high frame rates down to slightly less stupidly high frame rates.
  21. That's a good point. Still, you'd think they might make an exception to that for their extreme edition processors, since the people buying those won't care one bit how much power is being used, or how good the integrated GPU is. Let's see: https://www.pugetsystems.com/labs/articles/Core-i7-5960X-vs-4960X-Performance-Comparison-588/ It looks like the 5960X, 5930K, and 4960X are all trading blows with each other, depending on the benchmark. For the most part, it doesn't really seem like the per-core performance has changed much at all between the two generations. The only big increase in performance that stands out to me is the Linpack benchmark. As the author of the article points out, Linpack is able to use the AVX2 instruction set, which Ivy Bridge-E didn't have. I guess a new instruction set IS a form of performance improvement, but not quite in the sense that I was thinking of in the original post. Also, according to the LINPACK Benchmarks Wikipedia page: I guess what this all boils down to is ultimately money. As plenty of people have pointed out, without any real competition, there's no reason for Intel to push the technology as fast as they are able to. They just need to stay a little bit ahead of the next best thing. And that's why I cheer for AMD, but still buy from Intel.
  22. Yes, you are right that a plasmon (in this context, anyway) is a coupling between the electrons in a metal and an electromagnetic wave (i.e. light) at the surface of the metal. This coupling means that the plasmon can only exist at the interface between a metal and an insulator (like air). If it tries to move off into the insulating material, it dies out, because the electrons can't follow (they're trapped in the metal), and if it tries to move deeper into the metal, it also dies out, because the electromagnetic wave very quickly dissipates all of its energy there. So the plasmon is a wave that's trapped at the surface, much like a wave trapped on the surface of a pool of water. It doesn't have to be the surface of a nanoparticle, either: it could be a tiny wire, or a metal film. The interesting thing is that plasmons offer a unique way of manipulating light. But let's stop and look at fiber optic cables for a second. This is a light-based data application which has already existed for some time. Why? Because with light, it's possible to move around MASSIVE amounts of data very quickly, especially when compared with normal electronics. Think about the massive fiber optic cables that span the ocean, connecting North America and Europe. But there is a catch: light can't be confined to a space much smaller than its wavelength (the diffraction limit), so we can't shrink optical fibers and waveguides down to chip scale, which means we can't really build a processor using light and optical fibers alone. This is where plasmons come in. When you use light (which is a wave) to excite a plasmon (another wave) at the surface of a metal, both the light and the plasmon have the same frequency, which means that they can both potentially be used to transfer the same amount of data. However, a plasmon can be squeezed into a MUCH smaller space than the light by itself. The conclusion here is that plasmons have the potential to combine the high data density of light with the very compact circuitry we're accustomed to with modern CPUs.
  23. Haha, I think "microprocessor" is a bit of an archaic term nowadays. The fact that we're now seeing products built on a 14 nm process seems like a pretty clear indication that we're well into the realm of nanoprocessors. And sadly, we are getting close to physical limitations for traditional technology, at least in terms of shrinking down individual components. I mean, we have individual transistors with features around 14 nm across... that's only about 50 atoms. When you get to that scale, the properties of the materials you're using start to change, sometimes quite drastically. So sometime in the next decade, processor manufacturers are going to have to change to something else. First they'll probably start building "up" and having multiple layers of transistors, like what we're starting to see in SSDs (thanks Xaring for pointing that out), but eventually it might have to transition to a more fundamentally different technology. My bets are on light-based technologies (photonics and plasmonics) as opposed to the current electricity-based technologies.
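As a rough check on that "about 50 atoms" figure, here's the back-of-the-envelope arithmetic, assuming a ballpark silicon atom-to-atom spacing of about 0.27 nm (my own rough value, not a precise lattice measurement):

feature_nm = 14.0
atom_spacing_nm = 0.27  # rough order-of-magnitude spacing between silicon atoms
print(f"~{feature_nm / atom_spacing_nm:.0f} atoms across a {feature_nm:.0f} nm feature")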
  24. I agree with you, that's why I said "Moore's Law is the observation that...". I also agree that we'll get to a point where currently-used technology won't be able to offer any more improvements, because we will have reached a physical limit. We're getting fairly close to the point where individual (traditional) transistors can't be shrunk any further. That being said, we're still not at that point. Intel themselves have stated that they expect Moore's Law to continue until at least 2018 with the 7 nm process: http://www.pcworld.com/article/2887275/intel-moores-law-will-continue-through-7nm-chips.html Plus, the question I posed was about the past, where Moore's prediction HAS pretty much held strong. The fact is that transistor counts HAVE been roughly doubling every two years, but CPU performance growth has been far from that. This seems like a more reasonable explanation to me, especially since it also fits in with the performance trends we see in the GPU market. And if THAT'S the true explanation, then it's even more frustrating than the idea that on-board graphics are stealing away our sweet sweet CPU core power...