
About daimonie

  • CPU
    Intel i7-8700k
  • Motherboard
    Gigabyte Aorus Z370 Gaming 7
  • RAM
    G-Skill Trident Z RGB 3200 MHz
  • GPU
    Nvidia RTX 2070 FE
  • Case
    A desk
  • Storage
    triple Samsung 970 EVO
  • PSU
    Corsair RMx
  • Display(s)
    Asus ROG Full HD 180Hz
  • Cooling
  • Keyboard
    Steelseries 6GV2
  • Mouse
    Razer deathadder chroma
  • Sound
    Corsair Surround Void
  • Operating System
    Windows, (K)Ubuntu, Debian

  1. I've set the PWM profile so the fans don't ramp up as much. I looked at my temperatures under different daily loads (browsing, gaming, working) and set the profile to mostly flat levels for each. So it revs while transitioning from one load to another, but it doesn't rev during a workload.
  2. I'd be interested in seeing a waterblock review or comparison
  3. Numerous components (VRMs, the chipset and others) only need a small amount of cooling. In a sealed case they can still shed that heat through the air; in a vacuum case they cannot, so these components will heat up.
  4. The O-ring side of this fitting screws into your radiator. On the other side of this 90° fitting you can connect a normal compression fitting, and the hard tube goes onto that fitting like on any other.
  5. Wanted to get back to you on this. As you said, the Tensor cores are meant to do D = AB + C matrix operations (p. 15-16 and Fig. 8). For RT Cores, I'll quote some stuff:
"Due to its processing intensive nature, ray tracing has not been used in games for any significant rendering tasks. Instead, games that require 30 to 90+ frame/second animations have relied on fast, GPU-accelerated rasterization rendering techniques for years, at the expense of fully realistic looking scenes." (p. 25)
"While ray tracing can produce much more realistic imagery than rasterization, it is also computationally intensive. We have found that the best approach is hybrid rendering, a combination of ray tracing and rasterization. With this approach, rasterization is used where it is most effective, and ray tracing is used where it provides the most visual benefit vs rasterization, such as rendering reflections, refractions, and shadows. Figure 16 shows the hybrid rendering pipeline."
"Developers can also use material property thresholds to determine areas to perform ray tracing in a scene. One technique might be to specify that only surfaces with a certain reflectivity level, say 70%, would trigger whether ray tracing should be used on that surface to generate secondary rays."
"Do not expect hundreds of rays cast per pixel in real-time. In fact, far fewer rays are needed per pixel when using Turing RT Core acceleration in combination with advanced denoising filtering techniques."
"The RT Core includes two specialized units. The first unit does bounding box tests, and the second unit does ray-triangle intersection tests."
After reading about BVH search, I'm actually surprised. This is a technique I'm slightly familiar with from computational physics (https://en.wikipedia.org/wiki/Octree). The 'winner' of our computational course was someone who used this technique to do a real-time OpenGL simulation of Andromeda and the Milky Way colliding. We had all done similar calculations at the start of the course, and the number of particles he could handle (using an octree) was staggering. Also relevant is the difference between these methods (https://computergraphics.stackexchange.com/questions/7828/difference-between-bvh-and-octree-k-d-trees).
To make the comparison: in an N-particle simulation, you would make N(N-1)/2 pairwise calculations for each interaction. With the octree algorithm, you already know which particles are far enough away to be negligible. You would still include them, but quite often you would calculate against "a galaxy" rather than "a large cloud of stars", which improves performance. So these algorithms are about reducing overhead.
As I understand it from page 32, the SM launches a ray probe, after which the RT core traverses the BVH tree and runs ray-triangle tests, returning hit (or no hit) to the SM. At that point the SM knows whether or not it has to calculate anything. I imagine the process like: Probe: Will I hit something? RT core: Yes, you hit a cup. SM calculates reflection. Probe from reflection on cup: Will I hit something? RT core: Yes, you hit an inkpot. SM calculates reflection. Probe: Will I hit something? RT Core: Yes, you hit a light source. SM: K, cool.
It's not calculating the path of a ray that is the problem (it is straight, barring massive gravitational influence), but calculating what it will hit. And that's what this is custom designed for.
So what's the difference? Well, RT Cores are very different from Tensor cores. One thing is the size on the die. A Tensor core is a small unit able to do a simple mathematical operation really, really fast. An RT core is able to massively reduce the overhead in ray tracing really, really fast.
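The overhead reduction above can be made concrete with a toy count. This is a sketch under my own assumptions: the log-factor estimate for the tree method is a common Barnes-Hut-style heuristic, not a figure from the whitepaper.

```python
# Toy illustration of why tree methods (octree / BVH) cut overhead.
import math

def pairwise_interactions(n):
    # direct N-body: one force evaluation per unordered pair -> N(N-1)/2
    return n * (n - 1) // 2

def tree_interactions_estimate(n):
    # rough Barnes-Hut-style estimate: each particle touches about
    # log2(n) tree nodes (a heuristic; the real count depends on the
    # opening-angle criterion and the particle distribution)
    return int(n * math.log2(n))

for n in (1_024, 131_072):
    print(n, pairwise_interactions(n), tree_interactions_estimate(n))
```

For 131,072 particles that is roughly 8.6 billion pairs brute-force versus about 2.2 million node visits for the tree, which is why the octree demo could handle a staggering particle count.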
  6. That's how it is supposed to work, as far as I understand it. But I also understood - I think from a GamersNexus video - that they are applying filters somehow. I would have to check where that idea came from (I'm at work currently :()
  7. Seems like it. But because BF enabled it late in their process, they already had all those entities in place. So things that shouldn't reflect - e.g. tree bark - were reflecting when they turned it all on (what I called "enable it everywhere"). It's not an RTX problem; they just developed without RTX and enabled it later. Clearly you don't want to go refactor everything, so they tried to fix it by using some kind of filtering.
  8. I would phrase that quite differently:
It has not been used because it was too expensive computationally.
The extra visual quality was worth it, but not for real-time gaming. Anyone who plays competitive games will keep detail settings low.
NVidia reached the point where 60 FPS is doable, and wants to recoup some of its R&D costs.
The AI features of the cards (DLSS etc.) aren't used yet, but seem promising.
Those same features are worth it in some contexts. I've used GPUs in academic research for RT calculations, e.g. for a high-frequency tricolour fluorescent microscope. Tensor cores would've been awesome.
All of that seems fine to me. The RTX 2070 was competitive in pricing with the 1080, so I bought it.
Edit: Regarding Battlefield, it seems their implementation is quite bad. From what I can gather, their "enable it everywhere" approach gave too big a performance drop, and they filter out RTX features to control performance. That gives weird artefacts in addition to being inconsistent.
  9. I'm sorry, you seem to have misunderstood. I mentioned paraxial optics because it is a branch of optics that can formulate ray paths as pure matrix products. Apparently it is referred to differently in English (my optics professor used archaic naming; "transfer matrices" is the more common term: https://en.wikipedia.org/wiki/Ray_transfer_matrix_analysis). This was to support my question of what the differences between RT and Tensor cores are.
  10. While RPM doesn't translate to noise one-to-one, your point seems valid to me. It does matter which particular fans they are, and so on.
  11. I'm still curious about how they are different. There's a related branch of optics here: https://en.wikipedia.org/wiki/Paraxial_approximation If you use those approximations, you can recast reflections, Snell's law and everything else in terms of matrix products (2nd-order tensor products). So what's the difference between RT and Tensor cores?
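To show what I mean by "ray paths as matrix products", here is a minimal ray-transfer-matrix sketch. The matrices for free space and a thin lens are the standard paraxial ones; the f = 2 lens and the ray heights are just example numbers I picked.

```python
# Paraxial (ray transfer matrix) optics: a ray is a (height y, angle theta)
# vector, and every optical element is a 2x2 matrix acting on it.

def matmul2(A, B):
    # multiply two 2x2 matrices (compose optical elements)
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def apply(M, ray):
    # act on the (y, theta) ray vector
    y, th = ray
    return (M[0][0]*y + M[0][1]*th, M[1][0]*y + M[1][1]*th)

def free_space(d):
    # propagation over distance d: y' = y + d*theta
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    # ideal thin lens of focal length f: theta' = theta - y/f
    return [[1.0, 0.0], [-1.0/f, 1.0]]

# A ray parallel to the axis at height 1 hits a lens with f = 2, then
# travels a distance 2; it should cross the axis at the focal point.
system = matmul2(free_space(2.0), thin_lens(2.0))
y_out, th_out = apply(system, (1.0, 0.0))
print(y_out, th_out)  # y = 0.0 at the focal plane, theta = -0.5
```

The whole optical system collapses into one matrix product, which is exactly the kind of small dense linear algebra Tensor cores accelerate - hence my question.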
  12. The bigger it is, the slower your fans can be. It's a combination of surface area and the flow rate of air through the fins: a larger surface area allows a smaller flow rate. There are 420 mm rads on the market, e.g. https://www.ekwb.com/shop/ek-coolstream-ce-420-triple
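A toy energy balance makes the area/flow trade-off concrete. The balance Q = ṁ·c_p·ΔT_air is standard thermodynamics; the 300 W load and the 5 K vs 8 K air-side temperature rises are made-up example numbers standing in for "small rad" and "big rad".

```python
# Toy radiator energy balance: the heat dumped into the air is
#   Q = m_dot * c_p * dT_air,
# where dT_air is how much the air warms while crossing the rad.
# More fin area lets the air warm further (get closer to coolant temp),
# so the same Q needs less airflow -> the fans can spin slower.
C_P_AIR = 1005.0  # J/(kg*K), specific heat of air at roughly room temp

def required_airflow(q_watts, dt_air):
    # mass flow of air (kg/s) needed to carry q_watts at a given air-side dT
    return q_watts / (C_P_AIR * dt_air)

small_rad = required_airflow(300.0, 5.0)  # assumed: air only warms 5 K
large_rad = required_airflow(300.0, 8.0)  # assumed: more area, air warms 8 K
print(small_rad, large_rad)  # the larger rad needs less airflow
```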
  13. If NVidia used Google's tech, that would make sense. But I don't know whether they did. The price increase is small and seems fine for everything but the 2080 Ti. I think we can all agree that one went up by a larger amount (because it did), without a clear reason why. The reason might just be that the profit, if any, on the 2070 and 2080 is small (so you get more adopters), which they try to counter with the previous Titan segment of the market. As you can see in this simplified graph, the benefit of the old Titan and the new RTX 2080 Ti is fairly small, even for cinematic games. https://docs.google.com/spreadsheets/d/1xGzdYeTiTEH_kJYwYAYdvyUHT8ZFLHYxhmReEjJvntU/edit?usp=sharing But aha, you say: RTX and DLSS! Well, sure, but those hold for the whole RTX series - making the 2080 Ti still weirdly placed. Edit: I linked the sources I used. I wanted to get a quick graph done, so I took the first single source I could find for most cards. The MSRPs should be fine; they come from Wikipedia. Based on the graph, a logical place for the RTX 2080 Ti would be around 1k.
  14. For those, the RTX 2070 wouldn't use the fancy new tech; rather, it would trade blows with the 1080. The 1080 Ti is stronger than that, so if you are going for results at that price point, the GTX 1080 Ti seems the way to go.
  15. Oh, that's actually interesting. Did you wait for the liquid to equilibrate?