
Xilefian

Member
  • Posts

    18
  • Joined

  • Last visited

Reputation Activity

  1. Informative
    Xilefian got a reaction from Vector0102 in NVIDIA GeForce 3070/3080/3080 Ti (Ampere): RTX Has No Perf Hit & x80 Ti Card 50% Faster in 4K! (Update 6 ~ Specs / Overview / Details)   
    I expect it's very difficult to find an answer to this via Google that actually has good information rather than game benchmarks, so I'll write one here for anyone interested.
     
    For some background, the DirectX API is a programming library designed by Microsoft that lets programmers interact with GPUs without needing to write GPU-specific machine code for each card (which is infeasible, as even GPUs from the same company may have completely different architectures - in addition, the architecture itself may be a trade secret, so manufacturers don't want you coding against its binary format).
     
    Microsoft supplies a list of hardware features that a GPU needs to have to be awarded a specific DirectX compatibility version, and then Microsoft licenses the DirectX API to be implemented in the GPU driver. Hardware features could be "how many texture units are available" for effects like multi-texturing or "does this support floating point operations in programmable GPU shaders".
     
    Rivals to DirectX include OpenGL and Vulkan, which do not require a license but use the same "minimum hardware features to be awarded a specific version" scheme. They're also extensible: if a hardware feature is available but not enough features are present to be awarded a version number, developers can still use that feature. As a result, OpenGL historically ran about 2 or 3 years ahead of DirectX in terms of features. Vulkan is about 6 months ahead of DX12 (Vulkan is updated roughly every 1 or 2 weeks, DX12 about once every 6 months).
     
    So in short, DX10 is basically a sign that says "these DirectX 10 GPUs all have this set of hardware features available". DX9 introduced floating point shaders (which allow HDR rendering [note this isn't HDR display]) and DX10 introduced programmable geometry shaders (so the GPU can generate new triangles to be processed; previously the GPU would only draw the triangles handed to it, it wouldn't generate new ones). DX10 also introduced "feature levels" so GPUs without certain hardware features could still be DX10 compatible (DX11 expanded this to include some DX9-class hardware).
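     
    As a rough illustration of how feature levels surface to programmers, here is a minimal D3D11 sketch (my own example, not from any particular engine) that asks the runtime for the best feature level the installed GPU supports; the fallback list is purely illustrative:
     
        // Minimal sketch: querying D3D11 feature levels at device creation.
        #include <d3d11.h>
        #include <cstdio>
        #pragma comment(lib, "d3d11.lib")
        
        int main() {
            // Highest first; the runtime grants the best level the GPU/driver supports.
            const D3D_FEATURE_LEVEL requested[] = {
                D3D_FEATURE_LEVEL_11_0,   // full DX11 hardware
                D3D_FEATURE_LEVEL_10_1,
                D3D_FEATURE_LEVEL_10_0,   // DX10 class hardware
                D3D_FEATURE_LEVEL_9_3     // DX9 class hardware exposed through DX11
            };
            ID3D11Device* device = nullptr;
            ID3D11DeviceContext* context = nullptr;
            D3D_FEATURE_LEVEL granted{};
            const HRESULT hr = D3D11CreateDevice(
                nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                requested, sizeof(requested) / sizeof(requested[0]),
                D3D11_SDK_VERSION, &device, &granted, &context);
            if (SUCCEEDED(hr)) {
                // 'granted' tells the engine which hardware feature set it can rely on.
                std::printf("Feature level granted: 0x%04x\n", granted);
                context->Release();
                device->Release();
            }
            return 0;
        }
     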
    For programmers, DirectX 10 removes some of the old and less-used code of DX9 to make things more streamlined and simplified, so writing a DX10 game is a little easier than writing a DX9 game - but that only really affects the start of a project; a large game by a major game studio isn't influenced by this factor. DX11 pushed this direction even further.
     
    DX12 is a different beast from the DX APIs that came before it, as it is Mantle-derived like Vulkan and Apple's Metal API, so DX12 is more about the programming side of things than the hardware-feature side of things.
  2. Informative
    Xilefian got a reaction from TheOnlyKirst in NVIDIA GeForce 3070/3080/3080 Ti (Ampere): RTX Has No Perf Hit & x80 Ti Card 50% Faster in 4K! (Update 6 ~ Specs / Overview / Details)   
  3. Informative
    Xilefian got a reaction from realpetertdm in NVIDIA GeForce 3070/3080/3080 Ti (Ampere): RTX Has No Perf Hit & x80 Ti Card 50% Faster in 4K! (Update 6 ~ Specs / Overview / Details)   
  4. Informative
    Xilefian got a reaction from Fnige in NVIDIA GeForce 3070/3080/3080 Ti (Ampere): RTX Has No Perf Hit & x80 Ti Card 50% Faster in 4K! (Update 6 ~ Specs / Overview / Details)   
  5. Like
    Xilefian got a reaction from CodyT in Tom Scott on common VPN sponsorship claims   
    Louis Rossmann also released a video today about VPN sponsorships. In it he discusses some of his concerns, including the fact that no-one can really audit these VPN providers to make sure they're doing the right thing - and NordVPN, for one, somewhat proved that.
     
    This touches on my concerns for VPNs. I remember a few years ago a YouTuber reached out to me to ask my opinion on VPNs as they had just started sponsor segments for a provider - yet had no knowledge about what VPNs actually did. I said my main concern is that you're piping all your data through someone else's network: you have to trust that the VPN is doing the right thing, and there's no way to know if they actually are.
     
     
    I have a hard time trusting that my ISP is doing the right thing when I pipe my data through them - and they have government watchdogs on their asses here in the UK - so it's even tougher for me to trust that a VPN has everything covered.
     
    Geo IP switching is the one and only feature that tempts me to get a VPN, that sounds like a worthwhile tool if you need it.
  6. Like
    Xilefian reacted to kshade in Tom Scott on common VPN sponsorship claims   
    British educational youtuber Tom Scott has released a video about common claims made in VPN sponsorship segments.
     
     
    Video summary:
      • You don't need a VPN to hide your password these days, since SSL encryption is used almost everywhere.
      • "Military-grade encryption" is what SSL uses as well. Not a wrong claim, but misleading.
      • Your ISP can see what domain names you request, which is something you might want to hide with a VPN. But what they can't see is the whole URL.
      • VPN providers can be compromised by hackers or governments as well.
      • They are great for circumventing geo-blocking and for piracy though, but you can't really advertise with that.
      • Originally, this video was sponsored by a VPN provider, but they dropped it last second.
     tl;dr: VPNs are not a general necessity because of SSL
     
    I'm not posting this here as an attack on LTT or anything like that, and I'm aware that many of you will already know most of this. I'm just seeing a lot of channels with less of a tech-focused audience (and owners) doing actual scare-mongering, so it makes me glad that this easy-to-understand counterpoint exists.
  7. Agree
    Xilefian reacted to imreloadin in Apple should be EMBARRASSED - Pixel 4 Review   
    So what you're saying is that titles should be ignored as they're apparently not relevant to the content of the video?
    That's the only way I can wrap my head around a title for a review video going from "The competition should be embarrassed" to "They really cut a lot of corners here" if the content of the video itself hasn't changed...
     
    If the title was changed to not being so clickbaity then why didn't the new title have the same positive connotation to it? Why, instead, did the new title become a disparaging remark about the product? You would think that if the content of the review didn't change, as you claim, and the review of the product was positive that the title would have remained a positive reflection of the content. If the product wasn't reviewed positively then why give it such a positive spin in the title? To me that is pretty much flat out lying to get views...
  8. Agree
    Xilefian reacted to The_Prycer in Apple should be EMBARRASSED - Pixel 4 Review   
    If I may pontificate for a bit, for your edification: what you got called out on was disparaging Apple in your title and thumbnail over an objectively worse product. The clickbait is just part and parcel of being as successful a YouTuber as possible, and I can't hold that against you since it's sort of how you employ people and feed your family. I'm a musician; I understand needing to get people through the door so they can spend money at the bar or on your merch, just as much as you understand needing to get those clicks and hold our attention for at least the first 10 minutes of a video.
     
    Look, I get that LMG maybe isn't the most Apple-friendly place on the net, but when you proclaim that "Apple should be EMBARRASSED" and then proceed to show me a device that really only competes with the iPhone in terms of price, then you aren't just clickbaiting me (We spent HOW MUCH on this mystery box?!), you are being dishonest with us as viewers and damaging your journalistic ethos.
  9. Agree
    Xilefian got a reaction from imreloadin in Apple should be EMBARRASSED - Pixel 4 Review   
    It "feels" like the title and thumbnail were changed to specifically align with what the rest of the reviewing community are saying. This sounds dramatic, but it's actually shaken my personal value in LTT's opinions because what it feels like as a viewer is that they specifically changed their thumbnail and title to step in-line with the opinions of everyone else.
     
    The reality is probably different - perhaps LTT are simply experimenting with their titles and thumbnails to see if the click-through rate can be improved after a video's posting - however, as a viewer and subscriber of LTT, the title and thumbnail change feels like an unwillingness to stick to an opinion.

    This probably says a lot about video psychology, because I noted that the new thumbnail and title have changed the tone of the video (which in reality remained unchanged) to feel more critical of Google, whereas the original title and thumbnail seemed to make the video feel like it was praising Google. I believe there is some subtle psychology at work that the production team might not be aware of (or maybe they are and this was intentional).

    Still, it feels like LTT have made this change to tweak the tone of the video so it falls in line with what everyone else is saying, and personally (I fully acknowledge that these are my personal feelings) it has shaken up just how much value I am willing to put into LTT's opinions (at least with smartphones). I expect I will probably reassess how I feel about their smartphone reviews and treat them as a different category of review from now on.
     
    I'd be interested to hear if anyone else feels this way - I've specifically come back to the forum to make this post to find out.
  10. Agree
    Xilefian got a reaction from MandoPanda in MacBook Air 2018 – A PC Guy's Perspective   
  11. Agree
    Xilefian got a reaction from mrchow19910319 in MacBook Air 2018 – A PC Guy's Perspective   
  12. Like
    Xilefian got a reaction from raultherabbit in Wolfenstein II Gets Turing Support   
    This isn't how the graphics industry works, as much as consumers like to believe it is.
     
    NVIDIA implements these kinds of effects as Vulkan and OpenGL extensions, which are public API specifications that any IHV (independent hardware vendor) is free to implement.
    NV_shading_rate_image
    VK_NV_shading_rate_image
     
    What usually happens when AMD, Intel or NVIDIA creates extensions like this is that they publish the spec to Khronos for everyone to look at and read, then the other IHVs will go ahead and create their own version either based on the specification or with their own platform improvements in mind. If the specification is popular then vendors tend to work together to have it revised into an "EXT" extension, which is basically a version of the spec that is more vendor agnostic (even if it's just a cosmetic name change).
     
    If the extension is really popular and is something lots of GPUs support, then it can be revised into an "ARB" extension, which is the Khronos architecture review board. If it's a feature that becomes pedestrian in graphics rendering, then it stops being an extension and gets promoted into the full OpenGL/Vulkan specification and IHVs are required to implement it to be certified for the latest Vulkan/GL version.
     
    During all this, AMD, Intel, NVIDIA (ARM, Apple, Google, Samsung, etc, etc) are all free to implement each other's vendor specific extensions. As an example, NVIDIA GPUs and AMD GPUs both support a handful of vendor extensions from each other.
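     
    To make the "anyone can check for and use these extensions at runtime" point concrete, here's a minimal sketch (my own illustration, not from the extension spec) of how an application would test whether the driver exposes a vendor extension before using it - this assumes a GL 3.0+ context has already been created and a loader such as GLEW has been initialised:
     
        #include <GL/glew.h>   // any loader that exposes glGetStringi will do
        #include <cstring>
        #include <cstdio>
        
        // Returns true if the current context advertises the named extension.
        bool hasExtension(const char* name) {
            GLint count = 0;
            glGetIntegerv(GL_NUM_EXTENSIONS, &count);
            for (GLint i = 0; i < count; ++i) {
                const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
                if (ext && std::strcmp(ext, name) == 0) {
                    return true;
                }
            }
            return false;
        }
        
        // Usage: only take the variable rate shading path if the driver exposes it,
        // regardless of which vendor's GPU we happen to be running on.
        void chooseShadingPath() {
            if (hasExtension("GL_NV_shading_rate_image")) {
                std::puts("Variable rate shading path available");
            } else {
                std::puts("Falling back to full-rate shading");
            }
        }
     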
     
    Take a look at these non-NVIDIA extensions that NVIDIA has implemented on a GTX 980:

     
    SIGGRAPH is an event that's all about the graphics industry moving forward, so yes, this is all about technological progress - and that's especially obvious when it's published as a public Vulkan/OpenGL extension specification that anyone is welcome to implement or copy. Proprietary stuff does not get published (that goes against the "Open" of "OpenGL").
  13. Informative
    Xilefian got a reaction from GooDBoY920 in Wolfenstein II Gets Turing Support   
    Practically zero benefit at all. This is more like applying very low quality video compression within the graphics pipeline itself (so blocky artefacts, just like in low-quality video). The idea is to only use it in areas where it would look as close as possible to a non-variable-rate-shaded frame, so no visual quality difference should be perceivable, but that depends on the implementation (blocky artefacts are indeed very possible).
     
    The way video encoding works is the pixels themselves are analysed and processed - a video encoder has zero knowledge about where those pixels come from or how they were shaded by some random game or anything like that at all. It would be handy to be able to provide an encoder as much information as possible, but no such mechanism exists and frankly I doubt one would ever exist as video encoding is a general computer science problem rather than a 3D graphics rendering problem. It's almost entirely solved with hardware improvements rather than adding external hinting to the encoder.
  14. Informative
    Xilefian got a reaction from vanished in Wolfenstein II Gets Turing Support   
    Not quite correct; however, your description would be satisfactory for a certain PlayStation 2 developer in the way they attempted to describe Multi-Sampled Anti-Aliasing (MSAA).
     
    If you've got some technical knowledge you can read the GL spec to see how this works; I'll explain it in brief here. This takes advantage of the same hardware that MSAA uses (multi-sampled buffer outputs), but rather than writing a single shader invocation to multiple buffers based on geometry coverage, the single shader invocation is written to multiple pixels in a tiled area of the frame-buffer, and the size of that tiled area is controlled by the developer. It is defined by the "shading rate image", which is a texture array of unsigned 8-bit values that describe up to a 4x4 rectangle for the tiles (it can be any size from 1x1 to 4x4, so 1x4 or 4x1 is acceptable).
     
    The concept of a fragment (what we traditionally see as a single "pixel") now becomes up to 4x4 pixels. I'm guessing 4x4 is used because MSAA 16x is supported on all the hardware that will support NVIDIA's implementation of variable rate shading. They have specifically left a note in the spec stating that there is room to define more tile sizes in the future.
     
    So in super, super basic terms: This is MSAA but applied to various sized tiles on the frame buffer (so no coverage and no multi sample buffers).
     
    MSAA can be thought of as a different resolution, after all 4x MSAA at 1080p has almost identical memory requirements to 4K and roughly the same geometry coverage as 4K.
     
    Your comment about eye tracking is called "Foveated Rendering" https://en.wikipedia.org/wiki/Foveated_rendering - it works best in VR, where we can guarantee that only one person is viewing the display with their eyes a fixed distance from it. The spec for variable rate shading specifically mentions how it can be very beneficial for VR applications, which would be foveated rendering. However, the original idea for foveated rendering involved warping the geometry to dedicate more fragments where the eye is focusing and fewer around the edges (and then un-warping the result when presenting to the VR display). That's slightly different to what's happening in variable rate shading, which doesn't warp the geometry or the frame buffer - it drops the quality of the shading by applying shaded fragments to a tiled area.
     
    It's weird, we think of fragments as being pixels, but in this case a "fragment" is multiple pixels - because that's how we describe MSAA so it's not incorrect.
     
    What's pretty cool is that the variable rate tiles are defined in GPU memory with that R8UI texture array, so a GPU shader can calculate the shading rate. Wolfenstein II is a very good choice of game to do this with, as it uses sparse textures that require an analytical pre-pass; the variable rate shading can be added to this pre-pass stage, so it should perform surprisingly well in Wolfenstein II. It will trade a small bit of extra GPU compute time at the start of the frame and a small bit of GPU memory for the tile definition to save on the more expensive shading work (which is indeed the next bottleneck we see in graphics rendering, so this is all very good news).
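     
    For a rough idea of what "tiles defined in GPU memory" looks like, here's a minimal sketch (my own illustration, not code from the game or the spec) that allocates the R8UI shading rate image with plain OpenGL calls and fills it with a simple foveated-style pattern on the CPU. The tile footprint and the palette values come from NV_shading_rate_image, so treat those specifics as assumptions:
     
        #include <GL/glew.h>
        #include <vector>
        #include <cmath>
        #include <cstdint>
        
        // One texel of the shading rate image covers a fixed-size tile of the
        // framebuffer; 16x16 pixels is assumed here.
        GLuint createShadingRateImage(int fbWidth, int fbHeight) {
            const int w = (fbWidth  + 15) / 16;
            const int h = (fbHeight + 15) / 16;
        
            // Index into the shading rate palette: 0 = coarse shading at the edges,
            // 1 = full rate in the centre. The real palette is configured separately
            // through the NV_shading_rate_image entry points.
            std::vector<std::uint8_t> rates(static_cast<size_t>(w) * h);
            for (int y = 0; y < h; ++y) {
                for (int x = 0; x < w; ++x) {
                    const float dx = (x + 0.5f) / w - 0.5f;
                    const float dy = (y + 0.5f) / h - 0.5f;
                    const bool centre = std::sqrt(dx * dx + dy * dy) < 0.25f;
                    rates[static_cast<size_t>(y) * w + x] = centre ? 1 : 0;
                }
            }
        
            GLuint tex = 0;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexStorage2D(GL_TEXTURE_2D, 1, GL_R8UI, w, h);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                            GL_RED_INTEGER, GL_UNSIGNED_BYTE, rates.data());
            return tex;  // bound as the shading rate image via the NV extension
        }
     
    In a real renderer this pattern would be generated by a compute shader (e.g. as part of Wolfenstein II's analytical pre-pass) rather than uploaded from the CPU every frame.
     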
     
    I don't see why Intel and AMD cannot implement a solution like this also; I would imagine (and I'm not a GPU engineer) that it would be almost identical to their MSAA solutions. I can't imagine this working very well for a tiled renderer (mobile GPUs), although last time I spoke to NVIDIA and AMD about this a few years ago they claimed their hardware is essentially tile-based in how it treats the cache, so maybe that's a non-issue.
     
    EDIT: I've written a lot here, sorry. I want to add that Wolfenstein II already does a really, really good job at reducing the amount of shading that happens in a frame, so variable rate shading would not have as much of a benefit to this game compared to something like a traditional deferred renderer (a basic Unreal Engine game).
     
    EDIT2: I can also imagine this becoming a standard for VK/GL - it can be renamed to something else as "shading rate image" feels like a description of what this is used for, rather than what this is doing. I can imagine it being renamed to something like "Image Tiled Fragment" and the spec being more generic in something like "upload a tex image array that describes the tile sizes for fragment outputs across an image" - referencing NV_shading_rate_image. A more generic spec would probably lift the limit of 4x4 tiles and have a minimum requirement of 4x4 and the actual maximum tile size can be queried. I would also expect a requirement would be to make this MSAA compatible also, which would be freaking insane (that's like, 2D MSAA? Crazy).
    Unrelated extra: This is also a step towards the idea of variable rate coverage masks, an idea I proposed that redefined MSAA coverage to be across the image, rather than multiplied over buffers (so 1.5x MSAA would be possible).
  15. Informative
    Xilefian got a reaction from T02MY in Wolfenstein II Gets Turing Support   
  16. Informative
    Xilefian got a reaction from Syfes in Wolfenstein II Gets Turing Support   
  17. Informative
    Xilefian got a reaction from matrix07012 in Wolfenstein II Gets Turing Support   
  18. Informative
    Xilefian got a reaction from mrthuvi in Wolfenstein II Gets Turing Support   
  19. Informative
    Xilefian got a reaction from Bouzoo in Wolfenstein II Gets Turing Support   
  20. Agree
    Xilefian got a reaction from Curufinwe_wins in Wolfenstein II Gets Turing Support   
  21. Informative
    Xilefian got a reaction from Blademaster91 in Wolfenstein II Gets Turing Support   
  22. Informative
    Xilefian got a reaction from leadeater in Wolfenstein II Gets Turing Support   
  23. Agree
    Xilefian got a reaction from illegalwater in Wolfenstein II Gets Turing Support   
  24. Informative
    Xilefian got a reaction from ScratchCat in Bloomberg says super micro servers sold to Apple and Amazon contain spy chips, all three companies (and department of Homeland security) deny this   
    The article suggests that the attacked unit is the Baseboard Management Controller (BMC), which allows monitoring of the server hardware, with remote access capabilities.
     
    However, BMC management over a network uses UDP/IP; being Internet Protocol traffic, it's exactly the kind of thing that gets blocked or detected by a network firewall as standard. If these servers were sending data to external IP addresses then it would have been detected almost immediately and the hardware would never have been put into use. Even once installed, it would likely get flagged by the network firewall.
     
    Unless the BMC has some kind of secret remote access protocol that doesn't use the internet (and for some reason these servers were connected to an outside network that forwards non-internet protocols to the internet, which is very unlikely - those protocols get killed in flight), I doubt this would have gone under the radar at all - if it is even true.
     
    BMC's remote protocol is even one that Wireshark supports - so it would have been very easy to detect. Evidence for at least this can be easily retrieved.
  25. Informative
    Xilefian got a reaction from JCHelios in Is NVLink BETTER than SLI??   
    The "master" GPU doesn't direct any workload, all the commands come from the graphics driver: SLI bridge is used for timing and chunky memory transfers.
     
    At 4:20 - resource pooling has always been here with SLI; it just very rarely got used because the effort wasn't worth it for the fraction of people with SLI rigs - and it was slow as heck going over the PCIe lanes, or the SLI bridge if you were lucky.
     
    There's way more cool stuff with SLI that's possible, it very rarely gets discussed or mentioned, so I'm going to write out some of the technical awesomeness of SLI here (I've developed some stuff that specifically takes advantage of multi-GPU setups).
     
    ---
     
    With SLI (and CrossFire - I'll be using SLI from here on though) the GPU driver duplicates every command to each of the GPUs in SLI for a single SLI rendering context. So if you have two 4GiB GPUs and make a 1GiB texture allocation then that's 1GiB on GPU0 and 1GiB on GPU1. Fine and dandy, everyone should be on the same page with this because that sounds like the familiar myth "SLI memory does not stack".
     
    So things become a bit weird when you've got more than just an SLI rendering context going on. What if you have a PhysX context on GPU1? What if you have two compute contexts, one on GPU0 and another on GPU1? What if you have a second rendering context on GPU0? Suddenly you've got 3GiB allocated on one card, 2GiB on another. The SLI context is duplicating commands, but the other contexts are doing their own thing.
     
    This is why games like GTA V will tell you in their options menu that you have 8GiB of GPU memory for a 2x 4GiB SLI setup and simply double the amount of memory reported for each option - you do actually have 8GiB of memory and the allocations are doubled between the two GPUs. However, out of that memory, what if some is used by another context in another process, or maybe a compute job or PhysX job? Then more memory will be used on one GPU than on the other.
     
    A better visual representation these games could use is to show the individual memory used for each GPU; that would better explain what's technically happening.
     
    Now everything I just described is a boring, default SLI rendering context, but there's more to SLI than just this.
     
    ---
     
    If you have a Quadro GPU from anytime after the year 2005 and you're running Microsoft Windows then lucky you - you've got access to the Windows OpenGL extension NV_gpu_affinity.
    You can now produce an application or game that can specifically send commands to each individual GPU, including allocating different amounts of memory and rendering different scenes. You can allocate 4GiB on GPU0 and then allocate 3GiB and render shadow maps or something on GPU1 (and then rather slowly move all that memory back over the SLI bridge or PCIe lanes).
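     
    For anyone curious what that looks like in code, here's a rough sketch (my own illustration) of opening an affinity device context for one specific GPU using the WGL_NV_gpu_affinity entry points. It assumes a dummy OpenGL context is already current so wglGetProcAddress works, and that the usual wglext.h typedefs are available - treat the exact boilerplate as an assumption:
     
        #include <windows.h>
        #include <GL/gl.h>
        #include <GL/wglext.h>
        
        // Creates a device context whose GL contexts only execute on one GPU.
        HDC createDCForGpu(UINT gpuIndex) {
            auto pwglEnumGpusNV = reinterpret_cast<PFNWGLENUMGPUSNVPROC>(
                wglGetProcAddress("wglEnumGpusNV"));
            auto pwglCreateAffinityDCNV = reinterpret_cast<PFNWGLCREATEAFFINITYDCNVPROC>(
                wglGetProcAddress("wglCreateAffinityDCNV"));
            if (!pwglEnumGpusNV || !pwglCreateAffinityDCNV) {
                return nullptr;  // driver doesn't expose WGL_NV_gpu_affinity
            }
        
            HGPUNV gpu = nullptr;
            if (!pwglEnumGpusNV(gpuIndex, &gpu)) {
                return nullptr;  // no GPU at that index
            }
        
            // NULL-terminated list of GPU handles; contexts created on this DC
            // will only ever run on the listed GPU.
            HGPUNV gpuList[] = { gpu, nullptr };
            return pwglCreateAffinityDCNV(gpuList);
        }
     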
     
    What if you don't have a Quadro? Don't worry - since NVIDIA driver version 361 you've had access to the OpenGL extension NVX_linked_gpu_multicast. Now games can enjoy extremely efficient multi-GPU rendering by deciding which GPUs should receive which commands. You can have GPU0 render a completely different scene to GPU1, but a more common scenario would be to have GPU0 render the camera's "left eye" and GPU1 render the camera's "right eye", moving the results over the SLI bridge or PCIe lanes back to GPU0 for resolving. Handy for stereo 3D or VR.
     
    You can access individual GPUs with DirectX this way too by going through NV_DX_interop2. By doing this, games can have extremely fine control over how they do SLI. It doesn't have to be just AFR or SFR.
     
    With D3D12 and Vulkan we got explicit multi-GPU, where we can just straight up access any GPU we want (AMD, NV, Intel, whatever). All the memory is there, and we can render or compute whatever we want. Or we can be boring and ask for "AFR/SFR please".
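     
    As a small taste of the explicit approach, here's a minimal Vulkan sketch (again my own illustration, not from the post) that lists the linked GPU groups the driver exposes - each group is what a game would then drive explicitly instead of relying on AFR/SFR:
     
        #include <vulkan/vulkan.h>
        #include <cstdio>
        #include <vector>
        
        int main() {
            // Device groups require Vulkan 1.1 (or VK_KHR_device_group_creation).
            VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
            app.apiVersion = VK_API_VERSION_1_1;
            VkInstanceCreateInfo info{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
            info.pApplicationInfo = &app;
        
            VkInstance instance = VK_NULL_HANDLE;
            if (vkCreateInstance(&info, nullptr, &instance) != VK_SUCCESS) {
                return 1;
            }
        
            uint32_t groupCount = 0;
            vkEnumeratePhysicalDeviceGroups(instance, &groupCount, nullptr);
            std::vector<VkPhysicalDeviceGroupProperties> groups(
                groupCount, {VK_STRUCTURE_TYPE_PHYSICAL_DEVICE_GROUP_PROPERTIES});
            vkEnumeratePhysicalDeviceGroups(instance, &groupCount, groups.data());
        
            for (uint32_t i = 0; i < groupCount; ++i) {
                // A group with more than one physical device is a linked adapter
                // (SLI/NVLink style) set that one logical device can drive explicitly.
                std::printf("Group %u: %u GPU(s)\n", i, groups[i].physicalDeviceCount);
            }
        
            vkDestroyInstance(instance, nullptr);
            return 0;
        }
     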
     
    ---
     
    NVLink just makes all this a million times better. With the modern explicit graphics APIs I expect we'll be able to do direct GPU-to-GPU memory transfers over NVLink (pretty sure Vulkan already has this capability defined - it's just that at the moment it uses the slow PCIe lanes or *maybe* the SLI bridge).
     
    I wish multi-GPU stuff came back. It's one of my favourite complexities about graphics programming and it is perfect for rendering VR stereo scenes.
     
     
     