
wo0t

Member
  • Posts

    16
  • Joined


  1. IMG announced its new PowerVR GPU architecture with 7.8 GRay/s. That would be 26 times faster than the PowerVR GR6500 and would eliminate the need for raster and hybrid rendering. As a dual-GPU card (like the Caustic R2500) and/or in a multi-GPU system, RT performance could be increased even further. "This video highlights the new features and benefits of the IMG DXT GPU using a real-time in-house developed demo. IMG DXT aims to bring ray tracing to the mass market by enabling high visual quality using limited ray budgets." Whitepaper: https://resources.imaginationtech.com/hubfs/gated-files/raytracing/powervr-photon-whitepaper-en-jan23.pdf "Scalable to desktop and data centre (up to 9TFLOPS FP32 and over 7.8GRay/s)" https://www.imaginationtech.com/news/imagination-launches-the-most-advanced-ray-tracing-gpu/
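A quick back-of-the-envelope check of the 26x figure above (a sketch; the ~0.3 GRay/s rating assumed here for the GR6500 comes from IMG's older marketing numbers):

```python
# Sanity check of the "26x faster than the GR6500" claim.
dxt_rays    = 7.8e9   # IMG DXT peak, rays per second (from the announcement)
gr6500_rays = 0.3e9   # PowerVR GR6500, rays per second (assumed rating)

print(dxt_rays / gr6500_rays)         # ~26x
print(dxt_rays / (1920 * 1080 * 60))  # ~63 rays per pixel at 1080p / 60 fps
```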
  2. Please build the world's first system with four watercooled RTX 3090s to achieve 3000 Mrays/s. For testing real-time path tracing on this machine, you may want to get in touch with the original author of the Brigade Engine: https://jacco.ompf2.com/lighthouse-2-rtx-path-tracing-benchmark/ https://ompf2.com/viewtopic.php?f=8&t=2158 Brigade 2 demo: https://www.guru3d.com/files-details/brigade-2-demo-real-time-raytracing-engine-download.html https://web.archive.org/web/20130323204243/http://igad.nhtv.nl/~bikker/files/brigade2_r2017.zip
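A minimal sketch of the scaling assumption behind the 3000 Mrays/s target; the ~750 Mrays/s per-3090 figure and the linear multi-GPU scaling are assumptions, not measured results:

```python
# Assumed per-GPU throughput in the Lighthouse 2 benchmark and linear scaling across four cards.
per_gpu_mrays = 750   # assumed single RTX 3090 result, Mrays/s
gpus = 4

total_mrays = per_gpu_mrays * gpus
print(total_mrays)                             # 3000 Mrays/s target
print(total_mrays * 1e6 / (1920 * 1080 * 60))  # ~24 rays per pixel at 1080p / 60 fps
```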
  3. If you have a 3080 or 3090, can you post your results please? AFAIK nobody has tested real-time path tracing on Ampere yet. Thank you. https://jacco.ompf2.com/lighthouse-2-rtx-path-tracing-benchmark/ https://ompf2.com/viewtopic.php?f=8&t=2158
  4. Has anyone already tested the new Ampere GPUs with the Brigade renderer? Two Ampere GPUs should be fast enough for at least 40 rays per pixel at 720p@30fps.
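The ray budget behind that estimate works out as follows (assuming two GPUs scale close to linearly, which is an assumption):

```python
# Required ray throughput for 40 rays per pixel at 720p / 30 fps.
width, height, fps, rays_per_pixel = 1280, 720, 30, 40

total_rays = width * height * fps * rays_per_pixel
print(total_rays / 1e9)      # ~1.1 Grays/s required in total
print(total_rays / 2 / 1e6)  # ~553 Mrays/s per GPU if two GPUs scale linearly
```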
  5. LTT has not reviewed holodeck technology yet. Running the "Book of the Dead" Unity demo on it should be great. Unity version 2018.2 is recommended for compilation. An already compiled version can be found here. A64FX: Is ARM already faster than x86 on the desktop? Benchmarking file compression, Blender, video editing and, of course, Crysis on an A64FX should be interesting. The A64FX runs Linux and Windows. If LTT can't get an A64FX system, renting an instance could be an alternative. PowerVR ray tracing: Reviewing ray tracing from Imagination Technologies would also be interesting. Their hardware is much more effective than RTX. Here and here is why.
  6. XMG made it possible and built the fastest laptop: https://www.youtube.com/watch?v=jrTiE1va79c 16 cores@LN2: https://www.youtube.com/watch?v=jrTiE1va79c&t=870
  7. In some benchmarks a 3950X at 65 W (like the 2700 in the Acer Predator) is 2x faster than a 2700X. It's 7 nm. https://hothardware.com/reviews/amd-ryzen-9-3950x-zen-2-review?page=4
  8. Has anyone already tested whether it is possible to put a 3950X CPU into one of those AMD Ryzen laptops? For example the Acer Predator Helios 500 DTR with a Ryzen 7 2700 and a Vega 56 GPU. I guess a new BIOS is needed for that, and you may have to run it in the 65 W "ECO" mode for some minor performance loss.
  9. @M.Yurizaki It does not invalidate the previous data, because Nvidia itself says that Turing is only 6x faster than Pascal. It is highly unlikely that Nvidia understates the ray tracing performance of its GPU and that the 2080 Ti will turn out more than 6x faster in Blender. Also, there are not too many variables, your suspicion is unfounded and there is enough data to assess the ray tracing performance. If you have other ray tracing benchmarks, let us see them; otherwise you are only guessing. You don't understand that dedicated hardware can be much more effective than GPGPU. Mining was already mentioned. Another example is dedicated video decoding and encoding (for example Quick Sync in Intel CPUs). Pure ray tracing makes use of both CUDA cores and RT cores. RT cores alone are useless. The RT cores only speed up a part of the ray tracing rendering pipeline that is too thorny for the CPU or shader programs. "RT Cores accelerate Bounding Volume Hierarchy (BVH) traversal and ray/triangle intersection testing (ray casting) functions." Also, power gating means the Turing chip does not power the parts not needed for the task, so the parts in use can clock higher within the TDP limit. So your assumption is wrong for 2 reasons.
@tikker I'm not saying "a 200 W chip is automatically 100 times faster than a 2 W chip"! I wrote: "A 2080 Ti needs ~100x more power for the same raytracing performance" and "With at least 2 years more development time, a much higher budget, a much better 12nm process, 260W of power usage and a much bigger chip size (754mm²) I would expect at least 10x more ray tracing performance. It is the biggest gaming GPU chip ever made. Only the Titan V chip is bigger." Of course a 200 W chip would not scale perfectly, but it is astonishing that Nvidia needs 100x more power for only the same ray tracing performance. Nvidia claims Turing is only 6x faster at ray tracing than Pascal. From dedicated hardware we should expect much more. As you can see in the benchmark, Nvidia's "10 giga rays" for Turing and "1.21 giga rays" for Pascal are pure marketing BS. In reality it is only 426 million rays/s and 112 million rays/s. More than 5x faster than the 980 Ti would be ~355 million rays/s for the PowerVR GR6500, which is much closer to IMG's number. Real 10 giga rays/s would mean 33x more rays than the PowerVR GR6500 and ~80 rays per pixel at 1080p and 60fps. In that case Nvidia would not need any denoising and could show us real-time ray tracing demos instead of hybrid rendering. But even with hybrid rendering Nvidia needs denoising, and "RT ON" nevertheless causes huge fps drops.
@mr moose Your car comparison shows that you don't understand the topic.
@DocSwag The topic is ray tracing performance. Blender is heavily optimized for CUDA, i.e. for the 980 Ti. A ray tracing benchmark is also a graphics benchmark. If you have other ray tracing benchmarks, let us see them; otherwise you are only guessing. The benchmark in the wccftech article is a DX12 ray tracing benchmark from Microsoft and was released weeks after the Anandtech review. 100x less power for the same performance means a lot. It means that the PowerVR GPU architecture has much more potential for even more ray tracing performance, for example by also using the 12nm process and/or increasing chip size and/or doubling or tripling the clock speed and/or by putting more chips on a single board.
The 260W 754mm² Turing chip, on the other hand, is already maxed out, and Nvidia still has a lot of work to do to improve its RT performance. Dedicated RT hardware can be much more effective.
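For anyone who wants to check the figures quoted above, here is a small sketch reproducing them (the ~0.3 GRay/s rating for the GR6500 is an assumption taken from IMG's marketing numbers):

```python
# Marketing claim vs. GR6500 rating and per-pixel budget at 1080p / 60 fps.
claimed_rays = 10e9    # Nvidia's "10 giga rays" marketing figure
gr6500_rays  = 0.3e9   # PowerVR GR6500 (assumed rating)

print(claimed_rays / gr6500_rays)            # ~33x the GR6500
print(claimed_rays / (1920 * 1080 * 60))     # ~80 rays per pixel at 1080p / 60 fps

# Measured numbers from the DX12 benchmark cited above:
turing_rays, pascal_rays = 426e6, 112e6
print(turing_rays / pascal_rays)             # ~3.8x in that benchmark, well below the marketing figures
```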
  10. TDP values are not meaningless, and I'm not making it sound like all power is being used for RT. The PowerVR GPU also has a lot of other pieces inside that use power. Just look at the picture you posted. But both GPUs, the PowerVR GR6500 and the 2080 Ti, have dedicated RT hardware. I know that the benchmarks are different, but because a 980 Ti was used in both benchmarks, and because of Nvidia's claim that Turing is 6 times faster in ray tracing than Pascal, we can use this data to compare the ray tracing performance. It is highly unlikely that Nvidia understates the ray tracing performance of its GPU and that the 2080 Ti will turn out more than 6x faster in Blender. But if you don't believe Nvidia and those benchmarks, you will have to wait until Blender makes use of the RT cores of the 2080 Ti. That does not change the fact that a 2080 Ti needs ~100x more power for the same ray tracing performance as the 2W PowerVR GPU in 28nm. With at least 2 years more development time, a much higher budget, a much better 12nm process, 260W of power usage and a much bigger chip size (754mm²) I would expect at least 10x more ray tracing performance. It is the biggest gaming GPU chip ever made. Only the Titan V chip is bigger.
  11. You are misleading! I never wrote that it "can do ray tracing 100 times faster than an RTX 2080 Ti". I wrote: "A 2080 Ti needs ~100x more power for the same raytracing performance than the 2W PowerVR GPU in 28nm." The conclusions are not misleading. The video shows that their 2W dedicated RT GPU is more than 5 times faster in ray tracing than a 980 Ti. https://www.youtube.com/watch?v=ND96G9UZxxA&t=2m In the wccftech article you can see that the 260W 2080 Ti is 6 times faster in ray tracing than the 980 Ti. https://cdn.wccftech.com/wp-content/uploads/2018/10/NVIDIA-DirectX-Simple-Ray-tracing-Benchmark-RTX-2080-Ti-RTX-2080.jpg So the 2080 Ti needs ~100x more power for the same ray tracing performance. The PowerVR GPU and the 2080 Ti both have dedicated RT hardware. The 980 Ti only serves as an aid to compare the ray tracing performance of both GPUs. Back in 2016 there was no 2080 Ti available. I hope this explanation was comprehensible to you. By the way, in my opinion it is also astonishing that Nvidia can't do better although they had at least 2 years more time, a much higher budget and a much better 12nm process.
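A minimal sketch of that ~100x estimate, taking both benchmark results at face value even though they come from different workloads:

```python
# Power needed per unit of ray tracing performance, relative to a 980 Ti baseline.
gr6500_power, gr6500_speedup = 2, 5     # watts; >5x a 980 Ti in the Blender demo
rtx_power,    rtx_speedup    = 260, 6   # watts; ~6x a 980 Ti in the DX12 benchmark

power_ratio = rtx_power / gr6500_power        # 130x the power
perf_ratio  = rtx_speedup / gr6500_speedup    # 1.2x the performance
print(power_ratio / perf_ratio)               # ~108 -> roughly 100x the power for the same RT performance
```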
  12. Years before Nvidia, IMG showed the world's first dedicated ray tracing GPU and even fully ray-traced real-time demos (not only hybrid rendering demos like Nvidia). A 2080 Ti needs ~100x more power for the same ray tracing performance than the 2W PowerVR GPU in 28nm. LTT should try to get a PowerVR GPU and run some ray tracing benchmarks on it. Real-time ray tracing in Unreal Engine and Vulkan: https://www.youtube.com/watch?v=Xcf35d3z890&list=PLnOXj03cuJjmRN_Y8aN0vUH_jNbjyDXjB&index=1 https://www.imgtec.com/blog/video-ray-tracing-powervr-wizard/ "The initial stages involved adapting the engine’s render pass mechanism to perform a ray tracing scene build operation. The Vulkan ray tracing extension API makes this very easy, as the code flow required is very similar to an existing Vulkan raster render pass, requiring only wrapping the render sequence in with begin/end commands, and the use of a different Vulkan pipeline object with vertex and ray shaders instead of the regular raster shaders. We were able to reuse the existing “static mesh” geometry draw loop from the engine code, adapting it only to remove the frustum culling visibility checks, as we desired all geometry to be rendered into the ray tracing scene hierarchy." https://www.imgtec.com/blog/unreal-engine-and-the-ray-tracing-revelation/ PowerVR Wizard GPUs running the Apartment demo: https://www.youtube.com/watch?v=uxE2SYDHFtQ More than 5x faster than a 980 Ti in Blender: https://www.youtube.com/watch?v=ND96G9UZxxA&t=2m A 2080 Ti needs ~100x more power for the same ray tracing performance: https://wccftech.com/first-nvidia-rtx-2080-ti-2080-dxr-raytracing-benchmark/
  13. In order to avoid misleading the audience LTT should release a new 8700K review without covert overclocking (like other reviewers already did) and mention in the video title of the first review, in an annotation and/or the video description that LTT is comparing an overclocked and potentially unstable 8700K (with ~10% performance boost) to non-overclocked CPUs. Otherwise LTT would let Asus/Intel harm its credibility and encourage even more cheating in the future. Also LTT should check clock speeds and BIOS settings next time to detect this kind of cheating in the future. "Our next set of tests focuses on power consumption and thermals, and we intended to use Blender for the benchmark – but it just wasn’t stable on the ASUS Z370 board with multi-core enhancement enabled. The voltage couldn’t sustain the all-core Turbo at 4.7GHz, despite our manual overclocks sustaining at 4.9GHz for these tests. That’s one immediate reason you might want to avoid this setting, or a reason that crashes could be caused without much explanation as to why. ... Enabling the 4.7GHz forced all-core Turbo pushes us to 145W, a substantial 42% increase in power consumption for our 8.9% increase in Cinebench performance." https://www.gamersnexus.net/guides/3077-explaining-coffee-lake-turbo-8700k-8600k
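The efficiency impact follows directly from the GamersNexus numbers quoted above:

```python
# Performance-per-watt cost of ASUS "multi-core enhancement" on the 8700K.
perf_gain  = 1.089   # +8.9% Cinebench with the forced 4.7 GHz all-core turbo
power_gain = 1.42    # +42% power consumption

print(perf_gain / power_gain)  # ~0.77 -> roughly 23% worse performance per watt with MCE enabled
```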
  14. JayzTwoCents already apologized and explained it in his follow-up videos. If you enable XMP, the BIOS asks for permission and warns you that "all core enhancement" needs "sufficient processor cooling". But by default there is no prompt and no warning. So you are covertly overclocking and making your system unstable. This is unacceptable. "we intended to use Blender for the benchmark – but it just wasn’t stable on the ASUS Z370 board with multi-core enhancement enabled. ... Enabling the 4.7GHz forced all-core Turbo pushes us to 145W, a substantial 42% increase in power consumption for our 8.9% increase in Cinebench performance." https://www.gamersnexus.net/guides/3077-explaining-coffee-lake-turbo-8700k-8600k
  15. It is because LTT and some other reviewers have been fooled by a default-overclocking BIOS for reviewers (~10% more performance from the forced all-core turbo and +1 GHz L3 cache). Only by overclocking can an 8700K reach >1500 in Cinebench. LTT should make that covert overclocking transparent (for example with YT annotations in your 8700K review and/or a new video) and not let Asus/Intel get away with covert OC reviews. Otherwise you let Asus/Intel harm your credibility and encourage even more cheating in the future. Also, check clock speeds and BIOS settings next time to detect this kind of cheating in the future. https://www.pugetsystems.com/blog/2017/10/07/Why-Do-Hardware-Reviewers-Get-Different-Benchmark-Results-1058/
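A rough sketch of why a Cinebench R15 multi-core score above ~1500 points to the forced all-core turbo; the ~1400 stock score used here is an assumed ballpark value, while the clock speeds are Intel's spec versus the MCE setting:

```python
# Estimating the Cinebench R15 score with the covert all-core overclock applied.
stock_allcore_ghz = 4.3    # Intel's specified all-core turbo for the 8700K
mce_allcore_ghz   = 4.7    # forced all-core turbo with "multi-core enhancement"
stock_score       = 1400   # assumed typical stock R15 multi-core score

scaling = mce_allcore_ghz / stock_allcore_ghz   # ~1.09, matching the ~8.9%/~10% figures above
print(stock_score * scaling)                    # ~1530 -> scores above ~1500 imply the covert overclock
```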